00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4089 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3679 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.098 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.099 The recommended git tool is: git 00:00:00.099 using credential 00000000-0000-0000-0000-000000000002 00:00:00.101 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.115 Fetching changes from the remote Git repository 00:00:00.118 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.141 Using shallow fetch with depth 1 00:00:00.141 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.141 > git --version # timeout=10 00:00:00.167 > git --version # 'git version 2.39.2' 00:00:00.167 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.209 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.209 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.677 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.688 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.699 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.699 > git config core.sparsecheckout # timeout=10 00:00:04.709 > git read-tree -mu HEAD # timeout=10 00:00:04.723 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.745 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.745 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.851 [Pipeline] Start of Pipeline 00:00:04.867 [Pipeline] library 00:00:04.869 Loading library shm_lib@master 00:00:04.869 Library shm_lib@master is cached. Copying from home. 00:00:04.886 [Pipeline] node 00:00:04.910 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:04.912 [Pipeline] { 00:00:04.921 [Pipeline] catchError 00:00:04.922 [Pipeline] { 00:00:04.934 [Pipeline] wrap 00:00:04.942 [Pipeline] { 00:00:04.950 [Pipeline] stage 00:00:04.952 [Pipeline] { (Prologue) 00:00:04.970 [Pipeline] echo 00:00:04.972 Node: VM-host-SM9 00:00:04.979 [Pipeline] cleanWs 00:00:04.989 [WS-CLEANUP] Deleting project workspace... 00:00:04.989 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.994 [WS-CLEANUP] done 00:00:05.211 [Pipeline] setCustomBuildProperty 00:00:05.293 [Pipeline] httpRequest 00:00:05.616 [Pipeline] echo 00:00:05.618 Sorcerer 10.211.164.20 is alive 00:00:05.626 [Pipeline] retry 00:00:05.627 [Pipeline] { 00:00:05.638 [Pipeline] httpRequest 00:00:05.642 HttpMethod: GET 00:00:05.643 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.643 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.645 Response Code: HTTP/1.1 200 OK 00:00:05.645 Success: Status code 200 is in the accepted range: 200,404 00:00:05.646 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.225 [Pipeline] } 00:00:06.244 [Pipeline] // retry 00:00:06.251 [Pipeline] sh 00:00:06.529 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.543 [Pipeline] httpRequest 00:00:06.843 [Pipeline] echo 00:00:06.844 Sorcerer 10.211.164.20 is alive 00:00:06.851 [Pipeline] retry 00:00:06.852 [Pipeline] { 00:00:06.863 [Pipeline] httpRequest 00:00:06.867 HttpMethod: GET 00:00:06.867 URL: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:06.868 Sending request to url: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:06.869 Response Code: HTTP/1.1 200 OK 00:00:06.869 Success: Status code 200 is in the accepted range: 200,404 00:00:06.870 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:35.123 [Pipeline] } 00:00:35.141 [Pipeline] // retry 00:00:35.149 [Pipeline] sh 00:00:35.429 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:38.726 [Pipeline] sh 00:00:39.008 + git -C spdk log --oneline -n5 00:00:39.008 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:00:39.008 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:00:39.008 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:00:39.008 2e10c84c8 nvmf: Expose DIF type of namespace to host again 00:00:39.008 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:00:39.029 [Pipeline] withCredentials 00:00:39.040 > git --version # timeout=10 00:00:39.053 > git --version # 'git version 2.39.2' 00:00:39.070 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:39.072 [Pipeline] { 00:00:39.082 [Pipeline] retry 00:00:39.085 [Pipeline] { 00:00:39.100 [Pipeline] sh 00:00:39.381 + git ls-remote http://dpdk.org/git/dpdk main 00:00:39.393 [Pipeline] } 00:00:39.417 [Pipeline] // retry 00:00:39.422 [Pipeline] } 00:00:39.443 [Pipeline] // withCredentials 00:00:39.454 [Pipeline] httpRequest 00:00:39.825 [Pipeline] echo 00:00:39.827 Sorcerer 10.211.164.20 is alive 00:00:39.855 [Pipeline] retry 00:00:39.858 [Pipeline] { 00:00:39.872 [Pipeline] httpRequest 00:00:39.877 HttpMethod: GET 00:00:39.878 URL: http://10.211.164.20/packages/dpdk_c0f5a9dd74f41688660e4ef84487a175ee44a54a.tar.gz 00:00:39.878 Sending request to url: http://10.211.164.20/packages/dpdk_c0f5a9dd74f41688660e4ef84487a175ee44a54a.tar.gz 00:00:39.884 Response Code: HTTP/1.1 200 OK 00:00:39.885 Success: Status code 200 is in the accepted range: 200,404 00:00:39.886 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_c0f5a9dd74f41688660e4ef84487a175ee44a54a.tar.gz 00:01:37.300 [Pipeline] } 00:01:37.321 [Pipeline] // retry 00:01:37.330 [Pipeline] sh 00:01:37.616 + tar --no-same-owner -xf dpdk_c0f5a9dd74f41688660e4ef84487a175ee44a54a.tar.gz 00:01:39.008 [Pipeline] sh 00:01:39.289 + git -C dpdk log --oneline -n5 00:01:39.289 c0f5a9dd74 doc: fix grammar and phrasing in multi-process app guide 00:01:39.289 b456bf5006 usertools/devbind: fix NUMA node display 00:01:39.289 828fe9de4c usertools/devbind: restore active marker 00:01:39.289 497cf54829 dts: remove nested html directory for API doc 00:01:39.289 4843aacb0d doc: describe send scheduling counters in mlx5 guide 00:01:39.307 [Pipeline] writeFile 00:01:39.322 [Pipeline] sh 00:01:39.605 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:39.617 [Pipeline] sh 00:01:39.898 + cat autorun-spdk.conf 00:01:39.898 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.898 SPDK_TEST_NVMF=1 00:01:39.898 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.898 SPDK_TEST_URING=1 00:01:39.898 SPDK_TEST_USDT=1 00:01:39.898 SPDK_RUN_UBSAN=1 00:01:39.898 NET_TYPE=virt 00:01:39.898 SPDK_TEST_NATIVE_DPDK=main 00:01:39.898 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:39.898 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.906 RUN_NIGHTLY=1 00:01:39.908 [Pipeline] } 00:01:39.922 [Pipeline] // stage 00:01:39.938 [Pipeline] stage 00:01:39.940 [Pipeline] { (Run VM) 00:01:39.953 [Pipeline] sh 00:01:40.233 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:40.233 + echo 'Start stage prepare_nvme.sh' 00:01:40.233 Start stage prepare_nvme.sh 00:01:40.233 + [[ -n 5 ]] 00:01:40.233 + disk_prefix=ex5 00:01:40.233 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:40.233 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:40.233 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:40.233 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.233 ++ SPDK_TEST_NVMF=1 00:01:40.233 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:40.233 ++ SPDK_TEST_URING=1 00:01:40.233 ++ SPDK_TEST_USDT=1 00:01:40.233 ++ SPDK_RUN_UBSAN=1 00:01:40.233 ++ NET_TYPE=virt 00:01:40.233 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:40.233 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:40.233 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:40.233 ++ RUN_NIGHTLY=1 00:01:40.233 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:40.233 + nvme_files=() 00:01:40.233 + declare -A nvme_files 00:01:40.233 + backend_dir=/var/lib/libvirt/images/backends 00:01:40.233 + nvme_files['nvme.img']=5G 00:01:40.233 + nvme_files['nvme-cmb.img']=5G 00:01:40.233 + nvme_files['nvme-multi0.img']=4G 00:01:40.233 + nvme_files['nvme-multi1.img']=4G 00:01:40.233 + nvme_files['nvme-multi2.img']=4G 00:01:40.234 + nvme_files['nvme-openstack.img']=8G 00:01:40.234 + nvme_files['nvme-zns.img']=5G 00:01:40.234 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:40.234 + (( SPDK_TEST_FTL == 1 )) 00:01:40.234 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:40.234 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:40.234 + for nvme in "${!nvme_files[@]}" 00:01:40.234 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:40.234 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.234 + for nvme in "${!nvme_files[@]}" 00:01:40.234 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:40.234 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:40.234 + for nvme in "${!nvme_files[@]}" 00:01:40.234 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:40.234 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:40.234 + for nvme in "${!nvme_files[@]}" 00:01:40.234 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:40.234 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:40.234 + for nvme in "${!nvme_files[@]}" 00:01:40.234 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:40.234 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.234 + for nvme in "${!nvme_files[@]}" 00:01:40.234 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:40.234 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.234 + for nvme in "${!nvme_files[@]}" 00:01:40.234 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:40.493 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:40.493 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:40.493 + echo 'End stage prepare_nvme.sh' 00:01:40.493 End stage prepare_nvme.sh 00:01:40.504 [Pipeline] sh 00:01:40.784 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:40.784 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:01:40.784 00:01:40.784 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:40.784 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:40.784 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:40.784 HELP=0 00:01:40.784 DRY_RUN=0 00:01:40.784 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:40.784 NVME_DISKS_TYPE=nvme,nvme, 00:01:40.784 NVME_AUTO_CREATE=0 00:01:40.784 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:40.784 NVME_CMB=,, 00:01:40.784 NVME_PMR=,, 00:01:40.784 NVME_ZNS=,, 00:01:40.784 NVME_MS=,, 00:01:40.784 NVME_FDP=,, 
00:01:40.784 SPDK_VAGRANT_DISTRO=fedora39 00:01:40.784 SPDK_VAGRANT_VMCPU=10 00:01:40.784 SPDK_VAGRANT_VMRAM=12288 00:01:40.784 SPDK_VAGRANT_PROVIDER=libvirt 00:01:40.784 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:40.784 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:40.784 SPDK_OPENSTACK_NETWORK=0 00:01:40.784 VAGRANT_PACKAGE_BOX=0 00:01:40.784 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:40.784 FORCE_DISTRO=true 00:01:40.784 VAGRANT_BOX_VERSION= 00:01:40.784 EXTRA_VAGRANTFILES= 00:01:40.784 NIC_MODEL=e1000 00:01:40.784 00:01:40.784 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:40.784 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:44.071 Bringing machine 'default' up with 'libvirt' provider... 00:01:44.329 ==> default: Creating image (snapshot of base box volume). 00:01:44.329 ==> default: Creating domain with the following settings... 00:01:44.329 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732898227_6be15461994d985b9e06 00:01:44.329 ==> default: -- Domain type: kvm 00:01:44.329 ==> default: -- Cpus: 10 00:01:44.329 ==> default: -- Feature: acpi 00:01:44.329 ==> default: -- Feature: apic 00:01:44.329 ==> default: -- Feature: pae 00:01:44.329 ==> default: -- Memory: 12288M 00:01:44.329 ==> default: -- Memory Backing: hugepages: 00:01:44.329 ==> default: -- Management MAC: 00:01:44.329 ==> default: -- Loader: 00:01:44.329 ==> default: -- Nvram: 00:01:44.330 ==> default: -- Base box: spdk/fedora39 00:01:44.330 ==> default: -- Storage pool: default 00:01:44.330 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732898227_6be15461994d985b9e06.img (20G) 00:01:44.330 ==> default: -- Volume Cache: default 00:01:44.330 ==> default: -- Kernel: 00:01:44.330 ==> default: -- Initrd: 00:01:44.330 ==> default: -- Graphics Type: vnc 00:01:44.330 ==> default: -- Graphics Port: -1 00:01:44.330 ==> default: -- Graphics IP: 127.0.0.1 00:01:44.330 ==> default: -- Graphics Password: Not defined 00:01:44.330 ==> default: -- Video Type: cirrus 00:01:44.330 ==> default: -- Video VRAM: 9216 00:01:44.330 ==> default: -- Sound Type: 00:01:44.330 ==> default: -- Keymap: en-us 00:01:44.330 ==> default: -- TPM Path: 00:01:44.330 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:44.330 ==> default: -- Command line args: 00:01:44.330 ==> default: -> value=-device, 00:01:44.330 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:44.330 ==> default: -> value=-drive, 00:01:44.330 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:44.330 ==> default: -> value=-device, 00:01:44.330 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.330 ==> default: -> value=-device, 00:01:44.330 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:44.330 ==> default: -> value=-drive, 00:01:44.330 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:44.330 ==> default: -> value=-device, 00:01:44.330 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.330 ==> default: -> value=-drive, 00:01:44.330 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:44.330 ==> default: -> value=-device, 00:01:44.330 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.330 ==> default: -> value=-drive, 00:01:44.330 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:44.330 ==> default: -> value=-device, 00:01:44.330 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.589 ==> default: Creating shared folders metadata... 00:01:44.589 ==> default: Starting domain. 00:01:45.966 ==> default: Waiting for domain to get an IP address... 00:02:04.129 ==> default: Waiting for SSH to become available... 00:02:04.129 ==> default: Configuring and enabling network interfaces... 00:02:06.702 default: SSH address: 192.168.121.197:22 00:02:06.702 default: SSH username: vagrant 00:02:06.702 default: SSH auth method: private key 00:02:08.604 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:15.167 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:21.744 ==> default: Mounting SSHFS shared folder... 00:02:22.707 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:22.707 ==> default: Checking Mount.. 00:02:24.084 ==> default: Folder Successfully Mounted! 00:02:24.084 ==> default: Running provisioner: file... 00:02:24.651 default: ~/.gitconfig => .gitconfig 00:02:25.218 00:02:25.218 SUCCESS! 00:02:25.218 00:02:25.218 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:25.218 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:25.218 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:25.218 00:02:25.227 [Pipeline] } 00:02:25.243 [Pipeline] // stage 00:02:25.252 [Pipeline] dir 00:02:25.253 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:25.254 [Pipeline] { 00:02:25.267 [Pipeline] catchError 00:02:25.269 [Pipeline] { 00:02:25.281 [Pipeline] sh 00:02:25.560 + vagrant ssh-config --host vagrant 00:02:25.560 + sed -ne /^Host/,$p 00:02:25.560 + tee ssh_conf 00:02:29.750 Host vagrant 00:02:29.750 HostName 192.168.121.197 00:02:29.750 User vagrant 00:02:29.750 Port 22 00:02:29.750 UserKnownHostsFile /dev/null 00:02:29.750 StrictHostKeyChecking no 00:02:29.750 PasswordAuthentication no 00:02:29.750 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:29.750 IdentitiesOnly yes 00:02:29.750 LogLevel FATAL 00:02:29.750 ForwardAgent yes 00:02:29.750 ForwardX11 yes 00:02:29.750 00:02:29.764 [Pipeline] withEnv 00:02:29.766 [Pipeline] { 00:02:29.780 [Pipeline] sh 00:02:30.060 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:30.060 source /etc/os-release 00:02:30.060 [[ -e /image.version ]] && img=$(< /image.version) 00:02:30.060 # Minimal, systemd-like check. 
00:02:30.060 if [[ -e /.dockerenv ]]; then 00:02:30.060 # Clear garbage from the node's name: 00:02:30.060 # agt-er_autotest_547-896 -> autotest_547-896 00:02:30.060 # $HOSTNAME is the actual container id 00:02:30.060 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:30.060 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:30.060 # We can assume this is a mount from a host where container is running, 00:02:30.060 # so fetch its hostname to easily identify the target swarm worker. 00:02:30.060 container="$(< /etc/hostname) ($agent)" 00:02:30.060 else 00:02:30.060 # Fallback 00:02:30.060 container=$agent 00:02:30.060 fi 00:02:30.060 fi 00:02:30.060 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:30.060 00:02:30.331 [Pipeline] } 00:02:30.347 [Pipeline] // withEnv 00:02:30.356 [Pipeline] setCustomBuildProperty 00:02:30.370 [Pipeline] stage 00:02:30.372 [Pipeline] { (Tests) 00:02:30.389 [Pipeline] sh 00:02:30.669 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:30.942 [Pipeline] sh 00:02:31.223 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:31.493 [Pipeline] timeout 00:02:31.494 Timeout set to expire in 1 hr 0 min 00:02:31.495 [Pipeline] { 00:02:31.508 [Pipeline] sh 00:02:31.801 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:32.397 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:02:32.410 [Pipeline] sh 00:02:32.691 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:32.964 [Pipeline] sh 00:02:33.244 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:33.262 [Pipeline] sh 00:02:33.543 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:33.802 ++ readlink -f spdk_repo 00:02:33.802 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:33.802 + [[ -n /home/vagrant/spdk_repo ]] 00:02:33.802 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:33.802 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:33.802 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:33.802 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:33.802 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:33.802 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:33.802 + cd /home/vagrant/spdk_repo 00:02:33.802 + source /etc/os-release 00:02:33.802 ++ NAME='Fedora Linux' 00:02:33.802 ++ VERSION='39 (Cloud Edition)' 00:02:33.802 ++ ID=fedora 00:02:33.802 ++ VERSION_ID=39 00:02:33.802 ++ VERSION_CODENAME= 00:02:33.802 ++ PLATFORM_ID=platform:f39 00:02:33.802 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:33.802 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:33.802 ++ LOGO=fedora-logo-icon 00:02:33.802 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:33.802 ++ HOME_URL=https://fedoraproject.org/ 00:02:33.802 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:33.802 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:33.802 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:33.802 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:33.802 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:33.802 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:33.802 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:33.802 ++ SUPPORT_END=2024-11-12 00:02:33.802 ++ VARIANT='Cloud Edition' 00:02:33.802 ++ VARIANT_ID=cloud 00:02:33.802 + uname -a 00:02:33.802 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:33.802 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:34.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:34.061 Hugepages 00:02:34.061 node hugesize free / total 00:02:34.061 node0 1048576kB 0 / 0 00:02:34.061 node0 2048kB 0 / 0 00:02:34.061 00:02:34.061 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:34.320 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:34.320 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:34.320 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:34.320 + rm -f /tmp/spdk-ld-path 00:02:34.320 + source autorun-spdk.conf 00:02:34.320 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:34.320 ++ SPDK_TEST_NVMF=1 00:02:34.320 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:34.320 ++ SPDK_TEST_URING=1 00:02:34.320 ++ SPDK_TEST_USDT=1 00:02:34.320 ++ SPDK_RUN_UBSAN=1 00:02:34.320 ++ NET_TYPE=virt 00:02:34.320 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:34.320 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:34.320 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:34.320 ++ RUN_NIGHTLY=1 00:02:34.320 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:34.320 + [[ -n '' ]] 00:02:34.320 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:34.320 + for M in /var/spdk/build-*-manifest.txt 00:02:34.320 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:34.320 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:34.320 + for M in /var/spdk/build-*-manifest.txt 00:02:34.320 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:34.320 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:34.320 + for M in /var/spdk/build-*-manifest.txt 00:02:34.320 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:34.320 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:34.320 ++ uname 00:02:34.320 + [[ Linux == \L\i\n\u\x ]] 00:02:34.320 + sudo dmesg -T 00:02:34.320 + sudo dmesg --clear 00:02:34.320 + dmesg_pid=5988 00:02:34.320 + [[ Fedora Linux == FreeBSD ]] 
00:02:34.320 + sudo dmesg -Tw 00:02:34.320 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:34.320 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:34.320 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:34.320 + [[ -x /usr/src/fio-static/fio ]] 00:02:34.320 + export FIO_BIN=/usr/src/fio-static/fio 00:02:34.320 + FIO_BIN=/usr/src/fio-static/fio 00:02:34.320 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:34.320 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:34.320 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:34.320 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:34.320 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:34.320 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:34.320 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:34.320 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:34.320 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:34.320 16:37:58 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:34.320 16:37:58 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:34.320 16:37:58 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:34.320 16:37:58 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:34.320 16:37:58 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:34.320 16:37:58 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:34.320 16:37:58 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:34.320 16:37:58 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:34.320 16:37:58 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:34.320 16:37:58 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NATIVE_DPDK=main 00:02:34.320 16:37:58 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:34.320 16:37:58 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:34.320 16:37:58 -- spdk_repo/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:34.320 16:37:58 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:34.320 16:37:58 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:34.579 16:37:58 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:34.579 16:37:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:34.579 16:37:58 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:34.579 16:37:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:34.579 16:37:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:34.579 16:37:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:34.579 16:37:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.579 16:37:58 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.579 16:37:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.579 16:37:58 -- paths/export.sh@5 -- $ export PATH 00:02:34.579 16:37:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.579 16:37:58 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:34.579 16:37:58 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:34.579 16:37:58 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732898278.XXXXXX 00:02:34.579 16:37:58 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732898278.lrP1j7 00:02:34.579 16:37:58 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:34.579 16:37:58 -- common/autobuild_common.sh@499 -- $ '[' -n main ']' 00:02:34.579 16:37:58 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:34.579 16:37:58 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:34.579 16:37:58 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:34.579 16:37:58 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:34.579 16:37:58 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:34.579 16:37:58 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:34.579 16:37:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.579 16:37:58 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:34.579 16:37:58 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:34.579 16:37:58 -- pm/common@17 -- $ local monitor 00:02:34.579 16:37:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.579 16:37:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.579 16:37:58 -- pm/common@25 -- $ sleep 1 00:02:34.579 
16:37:58 -- pm/common@21 -- $ date +%s 00:02:34.579 16:37:58 -- pm/common@21 -- $ date +%s 00:02:34.579 16:37:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732898278 00:02:34.579 16:37:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732898278 00:02:34.579 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732898278_collect-vmstat.pm.log 00:02:34.579 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732898278_collect-cpu-load.pm.log 00:02:35.515 16:37:59 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:35.515 16:37:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:35.515 16:37:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:35.515 16:37:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:35.515 16:37:59 -- spdk/autobuild.sh@16 -- $ date -u 00:02:35.515 Fri Nov 29 04:37:59 PM UTC 2024 00:02:35.515 16:37:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:35.515 v25.01-pre-276-g35cd3e84d 00:02:35.515 16:37:59 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:35.515 16:37:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:35.515 16:37:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:35.515 16:37:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:35.515 16:37:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:35.515 16:37:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.515 ************************************ 00:02:35.515 START TEST ubsan 00:02:35.515 ************************************ 00:02:35.515 using ubsan 00:02:35.515 16:37:59 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:35.515 00:02:35.515 real 0m0.000s 00:02:35.515 user 0m0.000s 00:02:35.515 sys 0m0.000s 00:02:35.515 16:37:59 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:35.515 16:37:59 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:35.515 ************************************ 00:02:35.515 END TEST ubsan 00:02:35.515 ************************************ 00:02:35.515 16:37:59 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:35.515 16:37:59 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:35.515 16:37:59 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:35.515 16:37:59 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:35.515 16:37:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:35.515 16:37:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.515 ************************************ 00:02:35.515 START TEST build_native_dpdk 00:02:35.515 ************************************ 00:02:35.515 16:37:59 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:35.515 16:37:59 build_native_dpdk -- 
common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:35.515 16:37:59 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:35.516 16:37:59 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:35.775 c0f5a9dd74 doc: fix grammar and phrasing in multi-process app guide 00:02:35.775 b456bf5006 usertools/devbind: fix NUMA node display 00:02:35.775 828fe9de4c usertools/devbind: restore active marker 00:02:35.775 497cf54829 dts: remove nested html directory for API doc 00:02:35.775 4843aacb0d doc: describe send scheduling counters in mlx5 guide 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc4 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd 
/home/vagrant/spdk_repo/dpdk 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc4 21.11.0 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc4 '<' 21.11.0 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:35.775 16:37:59 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:35.775 16:37:59 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:35.775 patching file config/rte_config.h 00:02:35.776 Hunk #1 succeeded at 72 (offset 13 lines). 
00:02:35.776 16:37:59 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 24.11.0-rc4 24.07.0 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc4 '<' 24.07.0 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:35.776 16:37:59 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 24.11.0-rc4 24.07.0 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc4 '>=' 24.07.0 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:35.776 16:37:59 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:02:35.776 16:37:59 build_native_dpdk -- common/autobuild_common.sh@187 -- $ patch -p1 00:02:35.776 patching file drivers/bus/pci/linux/pci_uio.c 00:02:35.776 16:37:59 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:35.776 16:37:59 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:35.776 16:37:59 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:35.776 16:37:59 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:35.776 16:37:59 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:41.047 The Meson build system 00:02:41.047 Version: 1.5.0 00:02:41.047 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:41.047 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:41.047 
Build type: native build 00:02:41.047 Project name: DPDK 00:02:41.047 Project version: 24.11.0-rc4 00:02:41.047 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:41.047 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:41.047 Host machine cpu family: x86_64 00:02:41.047 Host machine cpu: x86_64 00:02:41.047 Message: ## Building in Developer Mode ## 00:02:41.047 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:41.047 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:41.047 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:41.047 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:41.047 Program cat found: YES (/usr/bin/cat) 00:02:41.047 config/meson.build:122: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:02:41.047 Compiler for C supports arguments -march=native: YES 00:02:41.047 Checking for size of "void *" : 8 00:02:41.047 Checking for size of "void *" : 8 (cached) 00:02:41.047 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:41.047 Library m found: YES 00:02:41.047 Library numa found: YES 00:02:41.047 Has header "numaif.h" : YES 00:02:41.047 Library fdt found: NO 00:02:41.047 Library execinfo found: NO 00:02:41.047 Has header "execinfo.h" : YES 00:02:41.047 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:41.047 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:41.047 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:41.047 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:41.047 Run-time dependency openssl found: YES 3.1.1 00:02:41.047 Run-time dependency libpcap found: YES 1.10.4 00:02:41.047 Has header "pcap.h" with dependency libpcap: YES 00:02:41.047 Compiler for C supports arguments -Wcast-qual: YES 00:02:41.047 Compiler for C supports arguments -Wdeprecated: YES 00:02:41.047 Compiler for C supports arguments -Wformat: YES 00:02:41.047 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:41.047 Compiler for C supports arguments -Wformat-security: NO 00:02:41.047 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:41.047 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:41.047 Compiler for C supports arguments -Wnested-externs: YES 00:02:41.047 Compiler for C supports arguments -Wold-style-definition: YES 00:02:41.047 Compiler for C supports arguments -Wpointer-arith: YES 00:02:41.047 Compiler for C supports arguments -Wsign-compare: YES 00:02:41.047 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:41.047 Compiler for C supports arguments -Wundef: YES 00:02:41.047 Compiler for C supports arguments -Wwrite-strings: YES 00:02:41.047 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:41.047 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:41.047 Program objdump found: YES (/usr/bin/objdump) 00:02:41.047 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES 00:02:41.047 Checking if "AVX512 checking" compiles: YES 00:02:41.047 Fetching value of define "__AVX512F__" : (undefined) 00:02:41.047 Fetching value of define "__SSE4_2__" : 1 00:02:41.047 Fetching value of define "__AES__" : 1 00:02:41.047 Fetching value of define "__AVX__" : 1 00:02:41.047 Fetching value of define "__AVX2__" : 1 00:02:41.047 Fetching value of define 
"__AVX512BW__" : (undefined) 00:02:41.047 Fetching value of define "__AVX512CD__" : (undefined) 00:02:41.047 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:41.047 Fetching value of define "__AVX512F__" : (undefined) 00:02:41.047 Fetching value of define "__AVX512VL__" : (undefined) 00:02:41.047 Fetching value of define "__PCLMUL__" : 1 00:02:41.047 Fetching value of define "__RDRND__" : 1 00:02:41.047 Fetching value of define "__RDSEED__" : 1 00:02:41.047 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:41.047 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:41.047 Message: lib/log: Defining dependency "log" 00:02:41.047 Message: lib/kvargs: Defining dependency "kvargs" 00:02:41.047 Message: lib/argparse: Defining dependency "argparse" 00:02:41.047 Message: lib/telemetry: Defining dependency "telemetry" 00:02:41.047 Checking for function "pthread_attr_setaffinity_np" : YES 00:02:41.047 Checking for function "getentropy" : NO 00:02:41.047 Message: lib/eal: Defining dependency "eal" 00:02:41.047 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:02:41.047 Message: lib/ring: Defining dependency "ring" 00:02:41.047 Message: lib/rcu: Defining dependency "rcu" 00:02:41.047 Message: lib/mempool: Defining dependency "mempool" 00:02:41.047 Message: lib/mbuf: Defining dependency "mbuf" 00:02:41.047 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:41.047 Compiler for C supports arguments -mpclmul: YES 00:02:41.047 Compiler for C supports arguments -maes: YES 00:02:41.047 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:41.047 Message: lib/net: Defining dependency "net" 00:02:41.047 Message: lib/meter: Defining dependency "meter" 00:02:41.047 Message: lib/ethdev: Defining dependency "ethdev" 00:02:41.047 Message: lib/pci: Defining dependency "pci" 00:02:41.047 Message: lib/cmdline: Defining dependency "cmdline" 00:02:41.047 Message: lib/metrics: Defining dependency "metrics" 00:02:41.047 Message: lib/hash: Defining dependency "hash" 00:02:41.047 Message: lib/timer: Defining dependency "timer" 00:02:41.047 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:41.047 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:41.047 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:41.047 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:41.047 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:41.047 Message: lib/acl: Defining dependency "acl" 00:02:41.047 Message: lib/bbdev: Defining dependency "bbdev" 00:02:41.047 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:41.047 Run-time dependency libelf found: YES 0.191 00:02:41.047 Message: lib/bpf: Defining dependency "bpf" 00:02:41.047 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:41.047 Message: lib/compressdev: Defining dependency "compressdev" 00:02:41.047 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:41.047 Message: lib/distributor: Defining dependency "distributor" 00:02:41.047 Message: lib/dmadev: Defining dependency "dmadev" 00:02:41.047 Message: lib/efd: Defining dependency "efd" 00:02:41.047 Message: lib/eventdev: Defining dependency "eventdev" 00:02:41.047 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:41.047 Message: lib/gpudev: Defining dependency "gpudev" 00:02:41.047 Message: lib/gro: Defining dependency "gro" 00:02:41.047 Message: lib/gso: Defining dependency "gso" 00:02:41.047 Message: lib/ip_frag: 
Defining dependency "ip_frag" 00:02:41.047 Message: lib/jobstats: Defining dependency "jobstats" 00:02:41.047 Message: lib/latencystats: Defining dependency "latencystats" 00:02:41.047 Message: lib/lpm: Defining dependency "lpm" 00:02:41.047 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:41.047 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:41.047 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:41.047 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:41.047 Message: lib/member: Defining dependency "member" 00:02:41.047 Message: lib/pcapng: Defining dependency "pcapng" 00:02:41.047 Message: lib/power: Defining dependency "power" 00:02:41.047 Message: lib/rawdev: Defining dependency "rawdev" 00:02:41.047 Message: lib/regexdev: Defining dependency "regexdev" 00:02:41.047 Message: lib/mldev: Defining dependency "mldev" 00:02:41.047 Message: lib/rib: Defining dependency "rib" 00:02:41.047 Message: lib/reorder: Defining dependency "reorder" 00:02:41.047 Message: lib/sched: Defining dependency "sched" 00:02:41.047 Message: lib/security: Defining dependency "security" 00:02:41.047 Message: lib/stack: Defining dependency "stack" 00:02:41.047 Has header "linux/userfaultfd.h" : YES 00:02:41.047 Has header "linux/vduse.h" : YES 00:02:41.047 Message: lib/vhost: Defining dependency "vhost" 00:02:41.047 Message: lib/ipsec: Defining dependency "ipsec" 00:02:41.047 Message: lib/pdcp: Defining dependency "pdcp" 00:02:41.047 Message: lib/fib: Defining dependency "fib" 00:02:41.047 Message: lib/port: Defining dependency "port" 00:02:41.047 Message: lib/pdump: Defining dependency "pdump" 00:02:41.047 Message: lib/table: Defining dependency "table" 00:02:41.047 Message: lib/pipeline: Defining dependency "pipeline" 00:02:41.047 Message: lib/graph: Defining dependency "graph" 00:02:41.047 Message: lib/node: Defining dependency "node" 00:02:41.047 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:41.047 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:41.047 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:41.047 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:41.047 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:41.047 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:41.047 Compiler for C supports arguments -Wno-unused-value: YES 00:02:41.047 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:41.047 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:41.047 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:41.047 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:41.047 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:41.047 Message: drivers/power/acpi: Defining dependency "power_acpi" 00:02:41.985 Message: drivers/power/amd_pstate: Defining dependency "power_amd_pstate" 00:02:41.985 Message: drivers/power/cppc: Defining dependency "power_cppc" 00:02:41.985 Message: drivers/power/intel_pstate: Defining dependency "power_intel_pstate" 00:02:41.985 Message: drivers/power/intel_uncore: Defining dependency "power_intel_uncore" 00:02:41.985 Message: drivers/power/kvm_vm: Defining dependency "power_kvm_vm" 00:02:41.985 Has header "sys/epoll.h" : YES 00:02:41.985 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:41.985 Configuring doxy-api-html.conf using configuration 00:02:41.985 Configuring doxy-api-man.conf using 
configuration 00:02:41.985 Program mandb found: YES (/usr/bin/mandb) 00:02:41.985 Program sphinx-build found: NO 00:02:41.985 Program sphinx-build found: NO 00:02:41.985 Configuring rte_build_config.h using configuration 00:02:41.985 Message: 00:02:41.985 ================= 00:02:41.985 Applications Enabled 00:02:41.985 ================= 00:02:41.985 00:02:41.985 apps: 00:02:41.985 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:41.985 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:41.985 test-pmd, test-regex, test-sad, test-security-perf, 00:02:41.985 00:02:41.985 Message: 00:02:41.985 ================= 00:02:41.985 Libraries Enabled 00:02:41.985 ================= 00:02:41.985 00:02:41.985 libs: 00:02:41.985 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:41.985 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:41.985 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:41.985 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:41.985 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:41.985 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:41.985 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:41.985 graph, node, 00:02:41.985 00:02:41.985 Message: 00:02:41.985 =============== 00:02:41.985 Drivers Enabled 00:02:41.985 =============== 00:02:41.985 00:02:41.985 common: 00:02:41.985 00:02:41.985 bus: 00:02:41.985 pci, vdev, 00:02:41.985 mempool: 00:02:41.985 ring, 00:02:41.985 dma: 00:02:41.985 00:02:41.985 net: 00:02:41.985 i40e, 00:02:41.985 raw: 00:02:41.985 00:02:41.985 crypto: 00:02:41.985 00:02:41.985 compress: 00:02:41.985 00:02:41.985 regex: 00:02:41.985 00:02:41.985 ml: 00:02:41.985 00:02:41.985 vdpa: 00:02:41.985 00:02:41.985 event: 00:02:41.985 00:02:41.985 baseband: 00:02:41.985 00:02:41.985 gpu: 00:02:41.985 00:02:41.985 power: 00:02:41.985 acpi, amd_pstate, cppc, intel_pstate, intel_uncore, kvm_vm, 00:02:41.985 00:02:41.985 Message: 00:02:41.985 ================= 00:02:41.985 Content Skipped 00:02:41.985 ================= 00:02:41.985 00:02:41.985 apps: 00:02:41.985 00:02:41.985 libs: 00:02:41.985 00:02:41.985 drivers: 00:02:41.985 common/cpt: not in enabled drivers build config 00:02:41.985 common/dpaax: not in enabled drivers build config 00:02:41.985 common/iavf: not in enabled drivers build config 00:02:41.985 common/idpf: not in enabled drivers build config 00:02:41.985 common/ionic: not in enabled drivers build config 00:02:41.985 common/mvep: not in enabled drivers build config 00:02:41.985 common/octeontx: not in enabled drivers build config 00:02:41.985 bus/auxiliary: not in enabled drivers build config 00:02:41.985 bus/cdx: not in enabled drivers build config 00:02:41.985 bus/dpaa: not in enabled drivers build config 00:02:41.985 bus/fslmc: not in enabled drivers build config 00:02:41.985 bus/ifpga: not in enabled drivers build config 00:02:41.985 bus/platform: not in enabled drivers build config 00:02:41.985 bus/uacce: not in enabled drivers build config 00:02:41.985 bus/vmbus: not in enabled drivers build config 00:02:41.985 common/cnxk: not in enabled drivers build config 00:02:41.985 common/mlx5: not in enabled drivers build config 00:02:41.985 common/nfp: not in enabled drivers build config 00:02:41.985 common/nitrox: not in enabled drivers build config 00:02:41.985 common/qat: not in enabled drivers build config 
00:02:41.985 common/sfc_efx: not in enabled drivers build config 00:02:41.985 mempool/bucket: not in enabled drivers build config 00:02:41.985 mempool/cnxk: not in enabled drivers build config 00:02:41.985 mempool/dpaa: not in enabled drivers build config 00:02:41.985 mempool/dpaa2: not in enabled drivers build config 00:02:41.985 mempool/octeontx: not in enabled drivers build config 00:02:41.985 mempool/stack: not in enabled drivers build config 00:02:41.985 dma/cnxk: not in enabled drivers build config 00:02:41.985 dma/dpaa: not in enabled drivers build config 00:02:41.985 dma/dpaa2: not in enabled drivers build config 00:02:41.985 dma/hisilicon: not in enabled drivers build config 00:02:41.985 dma/idxd: not in enabled drivers build config 00:02:41.985 dma/ioat: not in enabled drivers build config 00:02:41.985 dma/odm: not in enabled drivers build config 00:02:41.985 dma/skeleton: not in enabled drivers build config 00:02:41.985 net/af_packet: not in enabled drivers build config 00:02:41.985 net/af_xdp: not in enabled drivers build config 00:02:41.985 net/ark: not in enabled drivers build config 00:02:41.985 net/atlantic: not in enabled drivers build config 00:02:41.985 net/avp: not in enabled drivers build config 00:02:41.985 net/axgbe: not in enabled drivers build config 00:02:41.985 net/bnx2x: not in enabled drivers build config 00:02:41.985 net/bnxt: not in enabled drivers build config 00:02:41.985 net/bonding: not in enabled drivers build config 00:02:41.985 net/cnxk: not in enabled drivers build config 00:02:41.985 net/cpfl: not in enabled drivers build config 00:02:41.985 net/cxgbe: not in enabled drivers build config 00:02:41.986 net/dpaa: not in enabled drivers build config 00:02:41.986 net/dpaa2: not in enabled drivers build config 00:02:41.986 net/e1000: not in enabled drivers build config 00:02:41.986 net/ena: not in enabled drivers build config 00:02:41.986 net/enetc: not in enabled drivers build config 00:02:41.986 net/enetfec: not in enabled drivers build config 00:02:41.986 net/enic: not in enabled drivers build config 00:02:41.986 net/failsafe: not in enabled drivers build config 00:02:41.986 net/fm10k: not in enabled drivers build config 00:02:41.986 net/gve: not in enabled drivers build config 00:02:41.986 net/hinic: not in enabled drivers build config 00:02:41.986 net/hns3: not in enabled drivers build config 00:02:41.986 net/iavf: not in enabled drivers build config 00:02:41.986 net/ice: not in enabled drivers build config 00:02:41.986 net/idpf: not in enabled drivers build config 00:02:41.986 net/igc: not in enabled drivers build config 00:02:41.986 net/ionic: not in enabled drivers build config 00:02:41.986 net/ipn3ke: not in enabled drivers build config 00:02:41.986 net/ixgbe: not in enabled drivers build config 00:02:41.986 net/mana: not in enabled drivers build config 00:02:41.986 net/memif: not in enabled drivers build config 00:02:41.986 net/mlx4: not in enabled drivers build config 00:02:41.986 net/mlx5: not in enabled drivers build config 00:02:41.986 net/mvneta: not in enabled drivers build config 00:02:41.986 net/mvpp2: not in enabled drivers build config 00:02:41.986 net/netvsc: not in enabled drivers build config 00:02:41.986 net/nfb: not in enabled drivers build config 00:02:41.986 net/nfp: not in enabled drivers build config 00:02:41.986 net/ngbe: not in enabled drivers build config 00:02:41.986 net/ntnic: not in enabled drivers build config 00:02:41.986 net/null: not in enabled drivers build config 00:02:41.986 net/octeontx: not in enabled drivers 
build config 00:02:41.986 net/octeon_ep: not in enabled drivers build config 00:02:41.986 net/pcap: not in enabled drivers build config 00:02:41.986 net/pfe: not in enabled drivers build config 00:02:41.986 net/qede: not in enabled drivers build config 00:02:41.986 net/r8169: not in enabled drivers build config 00:02:41.986 net/ring: not in enabled drivers build config 00:02:41.986 net/sfc: not in enabled drivers build config 00:02:41.986 net/softnic: not in enabled drivers build config 00:02:41.986 net/tap: not in enabled drivers build config 00:02:41.986 net/thunderx: not in enabled drivers build config 00:02:41.986 net/txgbe: not in enabled drivers build config 00:02:41.986 net/vdev_netvsc: not in enabled drivers build config 00:02:41.986 net/vhost: not in enabled drivers build config 00:02:41.986 net/virtio: not in enabled drivers build config 00:02:41.986 net/vmxnet3: not in enabled drivers build config 00:02:41.986 net/zxdh: not in enabled drivers build config 00:02:41.986 raw/cnxk_bphy: not in enabled drivers build config 00:02:41.986 raw/cnxk_gpio: not in enabled drivers build config 00:02:41.986 raw/cnxk_rvu_lf: not in enabled drivers build config 00:02:41.986 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:41.986 raw/gdtc: not in enabled drivers build config 00:02:41.986 raw/ifpga: not in enabled drivers build config 00:02:41.986 raw/ntb: not in enabled drivers build config 00:02:41.986 raw/skeleton: not in enabled drivers build config 00:02:41.986 crypto/armv8: not in enabled drivers build config 00:02:41.986 crypto/bcmfs: not in enabled drivers build config 00:02:41.986 crypto/caam_jr: not in enabled drivers build config 00:02:41.986 crypto/ccp: not in enabled drivers build config 00:02:41.986 crypto/cnxk: not in enabled drivers build config 00:02:41.986 crypto/dpaa_sec: not in enabled drivers build config 00:02:41.986 crypto/dpaa2_sec: not in enabled drivers build config 00:02:41.986 crypto/ionic: not in enabled drivers build config 00:02:41.986 crypto/ipsec_mb: not in enabled drivers build config 00:02:41.986 crypto/mlx5: not in enabled drivers build config 00:02:41.986 crypto/mvsam: not in enabled drivers build config 00:02:41.986 crypto/nitrox: not in enabled drivers build config 00:02:41.986 crypto/null: not in enabled drivers build config 00:02:41.986 crypto/octeontx: not in enabled drivers build config 00:02:41.986 crypto/openssl: not in enabled drivers build config 00:02:41.986 crypto/scheduler: not in enabled drivers build config 00:02:41.986 crypto/uadk: not in enabled drivers build config 00:02:41.986 crypto/virtio: not in enabled drivers build config 00:02:41.986 compress/isal: not in enabled drivers build config 00:02:41.986 compress/mlx5: not in enabled drivers build config 00:02:41.986 compress/nitrox: not in enabled drivers build config 00:02:41.986 compress/octeontx: not in enabled drivers build config 00:02:41.986 compress/uadk: not in enabled drivers build config 00:02:41.986 compress/zlib: not in enabled drivers build config 00:02:41.986 regex/mlx5: not in enabled drivers build config 00:02:41.986 regex/cn9k: not in enabled drivers build config 00:02:41.986 ml/cnxk: not in enabled drivers build config 00:02:41.986 vdpa/ifc: not in enabled drivers build config 00:02:41.986 vdpa/mlx5: not in enabled drivers build config 00:02:41.986 vdpa/nfp: not in enabled drivers build config 00:02:41.986 vdpa/sfc: not in enabled drivers build config 00:02:41.986 event/cnxk: not in enabled drivers build config 00:02:41.986 event/dlb2: not in enabled drivers build 
config 00:02:41.986 event/dpaa: not in enabled drivers build config 00:02:41.986 event/dpaa2: not in enabled drivers build config 00:02:41.986 event/dsw: not in enabled drivers build config 00:02:41.986 event/opdl: not in enabled drivers build config 00:02:41.986 event/skeleton: not in enabled drivers build config 00:02:41.986 event/sw: not in enabled drivers build config 00:02:41.986 event/octeontx: not in enabled drivers build config 00:02:41.986 baseband/acc: not in enabled drivers build config 00:02:41.986 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:41.986 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:41.986 baseband/la12xx: not in enabled drivers build config 00:02:41.986 baseband/null: not in enabled drivers build config 00:02:41.986 baseband/turbo_sw: not in enabled drivers build config 00:02:41.986 gpu/cuda: not in enabled drivers build config 00:02:41.986 power/amd_uncore: not in enabled drivers build config 00:02:41.986 00:02:41.986 00:02:41.986 Message: DPDK build config complete: 00:02:41.986 source path = "/home/vagrant/spdk_repo/dpdk" 00:02:41.986 build path = "/home/vagrant/spdk_repo/dpdk/build-tmp" 00:02:41.986 Build targets in project: 249 00:02:41.986 00:02:41.986 DPDK 24.11.0-rc4 00:02:41.986 00:02:41.986 User defined options 00:02:41.986 libdir : lib 00:02:41.986 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:41.986 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:41.986 c_link_args : 00:02:41.986 enable_docs : false 00:02:41.986 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:41.986 enable_kmods : false 00:02:42.923 machine : native 00:02:42.923 tests : false 00:02:42.923 00:02:42.923 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:42.923 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
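The summary above records the effective "User defined options" but not the configure command itself; that command is issued by SPDK's build tooling and does not appear in this excerpt. As a rough sketch only, assuming standard `meson setup -Doption=value` syntax and taking the paths and values verbatim from the summary, an equivalent explicit configure step would look something like:

    # values copied from the "User defined options" block above; the real invocation may differ
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm

Two warnings in the log are relevant to this sketch: the WARNING directly above notes that passing options to bare `meson` instead of `meson setup` is deprecated, and the earlier config/meson.build warning notes that the `machine` option is deprecated in favour of `cpu_instruction_set`.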
00:02:42.923 16:38:06 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:43.181 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:43.181 [1/769] Compiling C object lib/librte_log.a.p/log_log_syslog.c.o 00:02:43.181 [2/769] Compiling C object lib/librte_log.a.p/log_log_journal.c.o 00:02:43.181 [3/769] Compiling C object lib/librte_log.a.p/log_log_color.c.o 00:02:43.181 [4/769] Compiling C object lib/librte_log.a.p/log_log_timestamp.c.o 00:02:43.181 [5/769] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:43.181 [6/769] Linking static target lib/librte_kvargs.a 00:02:43.181 [7/769] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:43.439 [8/769] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:43.439 [9/769] Linking static target lib/librte_log.a 00:02:43.439 [10/769] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:43.439 [11/769] Linking static target lib/librte_argparse.a 00:02:43.439 [12/769] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.697 [13/769] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.697 [14/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:43.697 [15/769] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:43.697 [16/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:43.697 [17/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:43.697 [18/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:43.697 [19/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:43.956 [20/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:43.956 [21/769] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.956 [22/769] Linking target lib/librte_log.so.25.0 00:02:44.214 [23/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:44.214 [24/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:44.214 [25/769] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:02:44.472 [26/769] Linking target lib/librte_kvargs.so.25.0 00:02:44.472 [27/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore_var.c.o 00:02:44.472 [28/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:44.472 [29/769] Linking target lib/librte_argparse.so.25.0 00:02:44.472 [30/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:44.472 [31/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:44.472 [32/769] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:44.472 [33/769] Linking static target lib/librte_telemetry.a 00:02:44.472 [34/769] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:02:44.472 [35/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:44.730 [36/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:44.730 [37/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:44.730 [38/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:44.989 [39/769] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:44.989 [40/769] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.989 [41/769] Linking target lib/librte_telemetry.so.25.0 00:02:44.989 [42/769] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:02:45.247 [43/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:45.247 [44/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:45.247 [45/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:45.247 [46/769] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:45.247 [47/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:45.247 [48/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:45.247 [49/769] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:45.247 [50/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:45.247 [51/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:45.247 [52/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:45.505 [53/769] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:45.763 [54/769] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o 00:02:45.763 [55/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:45.763 [56/769] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:45.763 [57/769] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:46.021 [58/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:46.021 [59/769] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:46.021 [60/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:46.021 [61/769] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:46.279 [62/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:46.279 [63/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:46.279 [64/769] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:46.538 [65/769] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:46.538 [66/769] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:46.538 [67/769] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:46.538 [68/769] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:46.538 [69/769] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:46.797 [70/769] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:46.797 [71/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:46.797 [72/769] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:46.797 [73/769] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:46.797 [74/769] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:47.055 [75/769] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:47.055 [76/769] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:47.313 [77/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:47.313 [78/769] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:47.313 [79/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:47.313 [80/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:47.313 [81/769] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:47.571 [82/769] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:47.571 [83/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:47.571 [84/769] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:47.571 [85/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:47.571 [86/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:47.571 [87/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:47.829 [88/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:47.829 [89/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:48.088 [90/769] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:48.088 [91/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:48.088 [92/769] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:02:48.088 [93/769] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:48.346 [94/769] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:48.346 [95/769] Linking static target lib/librte_ring.a 00:02:48.346 [96/769] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:48.346 [97/769] Linking static target lib/librte_eal.a 00:02:48.604 [98/769] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:48.604 [99/769] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.604 [100/769] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:48.604 [101/769] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:48.604 [102/769] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:48.862 [103/769] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:48.862 [104/769] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:48.862 [105/769] Linking static target lib/librte_mempool.a 00:02:49.121 [106/769] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:49.121 [107/769] Linking static target lib/librte_rcu.a 00:02:49.121 [108/769] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:49.121 [109/769] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:49.380 [110/769] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.380 [111/769] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:49.380 [112/769] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:49.380 [113/769] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:49.380 [114/769] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:49.380 [115/769] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:49.380 [116/769] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.638 [117/769] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:49.638 [118/769] Linking static target lib/librte_mbuf.a 00:02:49.896 [119/769] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:49.896 [120/769] Linking static target 
lib/librte_net.a 00:02:49.896 [121/769] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:49.896 [122/769] Linking static target lib/librte_meter.a 00:02:50.155 [123/769] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:50.155 [124/769] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.155 [125/769] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.155 [126/769] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:50.155 [127/769] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.413 [128/769] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.413 [129/769] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:50.980 [130/769] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:50.980 [131/769] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:51.239 [132/769] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:51.239 [133/769] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:51.239 [134/769] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:51.495 [135/769] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:51.495 [136/769] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:51.495 [137/769] Linking static target lib/librte_pci.a 00:02:51.495 [138/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:51.752 [139/769] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.752 [140/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:51.752 [141/769] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.752 [142/769] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:51.752 [143/769] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:51.752 [144/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:51.752 [145/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:52.009 [146/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:52.009 [147/769] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:52.009 [148/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:52.009 [149/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:52.009 [150/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:52.009 [151/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:52.009 [152/769] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:52.266 [153/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:52.266 [154/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:52.266 [155/769] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:52.266 [156/769] Linking static target lib/librte_cmdline.a 00:02:52.523 [157/769] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:52.781 [158/769] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:52.781 [159/769] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:52.781 
[160/769] Linking static target lib/librte_metrics.a 00:02:52.781 [161/769] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:52.781 [162/769] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:53.040 [163/769] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:53.040 [164/769] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.298 [165/769] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.556 [166/769] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:53.556 [167/769] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gf2_poly_math.c.o 00:02:53.556 [168/769] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:53.556 [169/769] Linking static target lib/librte_timer.a 00:02:54.127 [170/769] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.127 [171/769] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:54.127 [172/769] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:54.403 [173/769] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:54.671 [174/769] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:54.671 [175/769] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:54.671 [176/769] Linking static target lib/librte_ethdev.a 00:02:54.929 [177/769] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.929 [178/769] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:55.186 [179/769] Linking target lib/librte_eal.so.25.0 00:02:55.186 [180/769] Linking static target lib/librte_bitratestats.a 00:02:55.186 [181/769] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:55.187 [182/769] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:55.187 [183/769] Linking static target lib/librte_hash.a 00:02:55.187 [184/769] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:02:55.187 [185/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:55.187 [186/769] Linking target lib/librte_ring.so.25.0 00:02:55.187 [187/769] Linking target lib/librte_meter.so.25.0 00:02:55.187 [188/769] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:55.444 [189/769] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.444 [190/769] Linking target lib/librte_pci.so.25.0 00:02:55.444 [191/769] Linking target lib/librte_timer.so.25.0 00:02:55.444 [192/769] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:02:55.444 [193/769] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:02:55.444 [194/769] Linking target lib/librte_rcu.so.25.0 00:02:55.444 [195/769] Linking target lib/librte_mempool.so.25.0 00:02:55.444 [196/769] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:02:55.444 [197/769] Linking static target lib/librte_bbdev.a 00:02:55.444 [198/769] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:02:55.444 [199/769] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:02:55.702 [200/769] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:02:55.702 [201/769] Linking target lib/librte_mbuf.so.25.0 00:02:55.702 [202/769] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:55.702 [203/769] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:02:55.702 [204/769] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:55.702 [205/769] Linking static target lib/acl/libavx2_tmp.a 00:02:55.702 [206/769] Linking target lib/librte_net.so.25.0 00:02:55.960 [207/769] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.960 [208/769] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:02:55.961 [209/769] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.961 [210/769] Linking target lib/librte_cmdline.so.25.0 00:02:55.961 [211/769] Linking target lib/librte_bbdev.so.25.0 00:02:55.961 [212/769] Linking target lib/librte_hash.so.25.0 00:02:56.218 [213/769] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:02:56.218 [214/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:56.218 [215/769] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:56.218 [216/769] Linking static target lib/acl/libavx512_tmp.a 00:02:56.218 [217/769] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:56.218 [218/769] Linking static target lib/librte_acl.a 00:02:56.476 [219/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:56.476 [220/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:56.734 [221/769] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:56.734 [222/769] Linking static target lib/librte_cfgfile.a 00:02:56.734 [223/769] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.734 [224/769] Linking target lib/librte_acl.so.25.0 00:02:56.734 [225/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:56.734 [226/769] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:02:56.992 [227/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:56.992 [228/769] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.992 [229/769] Linking target lib/librte_cfgfile.so.25.0 00:02:56.992 [230/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:56.992 [231/769] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:57.250 [232/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:57.251 [233/769] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:57.508 [234/769] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:57.508 [235/769] Linking static target lib/librte_bpf.a 00:02:57.508 [236/769] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:57.508 [237/769] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:57.508 [238/769] Linking static target lib/librte_compressdev.a 00:02:57.766 [239/769] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:57.766 [240/769] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.766 [241/769] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:57.766 [242/769] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:58.025 [243/769] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 
00:02:58.025 [244/769] Linking static target lib/librte_distributor.a 00:02:58.283 [245/769] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:58.283 [246/769] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.283 [247/769] Linking target lib/librte_compressdev.so.25.0 00:02:58.283 [248/769] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:58.283 [249/769] Linking static target lib/librte_dmadev.a 00:02:58.542 [250/769] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.542 [251/769] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:58.542 [252/769] Linking target lib/librte_distributor.so.25.0 00:02:58.801 [253/769] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.801 [254/769] Linking target lib/librte_dmadev.so.25.0 00:02:58.801 [255/769] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:59.059 [256/769] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:02:59.318 [257/769] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:59.318 [258/769] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:59.577 [259/769] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:59.577 [260/769] Linking static target lib/librte_efd.a 00:02:59.577 [261/769] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:59.835 [262/769] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.836 [263/769] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:59.836 [264/769] Linking static target lib/librte_cryptodev.a 00:02:59.836 [265/769] Linking target lib/librte_efd.so.25.0 00:03:00.094 [266/769] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:00.094 [267/769] Linking static target lib/librte_dispatcher.a 00:03:00.094 [268/769] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:00.352 [269/769] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.352 [270/769] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:00.352 [271/769] Linking static target lib/librte_gpudev.a 00:03:00.352 [272/769] Linking target lib/librte_ethdev.so.25.0 00:03:00.352 [273/769] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:00.610 [274/769] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:00.610 [275/769] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.610 [276/769] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:03:00.610 [277/769] Linking target lib/librte_metrics.so.25.0 00:03:00.610 [278/769] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:00.610 [279/769] Linking target lib/librte_bpf.so.25.0 00:03:00.868 [280/769] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:03:00.868 [281/769] Linking target lib/librte_bitratestats.so.25.0 00:03:00.868 [282/769] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:03:00.868 [283/769] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:00.868 [284/769] Compiling C object 
lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:01.130 [285/769] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.130 [286/769] Linking target lib/librte_cryptodev.so.25.0 00:03:01.130 [287/769] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.130 [288/769] Linking target lib/librte_gpudev.so.25.0 00:03:01.130 [289/769] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:03:01.387 [290/769] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:01.387 [291/769] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:01.387 [292/769] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:01.387 [293/769] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:01.387 [294/769] Linking static target lib/librte_eventdev.a 00:03:01.387 [295/769] Linking static target lib/librte_gro.a 00:03:01.387 [296/769] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:01.645 [297/769] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:01.645 [298/769] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:01.645 [299/769] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.645 [300/769] Linking target lib/librte_gro.so.25.0 00:03:01.902 [301/769] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:01.902 [302/769] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:01.902 [303/769] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:01.902 [304/769] Linking static target lib/librte_gso.a 00:03:02.159 [305/769] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.159 [306/769] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:02.159 [307/769] Linking target lib/librte_gso.so.25.0 00:03:02.159 [308/769] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:02.417 [309/769] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:02.417 [310/769] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:02.417 [311/769] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:02.417 [312/769] Linking static target lib/librte_jobstats.a 00:03:02.417 [313/769] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:02.675 [314/769] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:02.675 [315/769] Linking static target lib/librte_ip_frag.a 00:03:02.675 [316/769] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:02.675 [317/769] Linking static target lib/librte_latencystats.a 00:03:02.675 [318/769] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.932 [319/769] Linking target lib/librte_jobstats.so.25.0 00:03:02.932 [320/769] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.932 [321/769] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:02.932 [322/769] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.932 [323/769] Linking target lib/librte_ip_frag.so.25.0 00:03:02.932 [324/769] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:02.932 [325/769] Linking 
static target lib/member/libsketch_avx512_tmp.a 00:03:02.932 [326/769] Linking target lib/librte_latencystats.so.25.0 00:03:03.190 [327/769] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:03.190 [328/769] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:03.190 [329/769] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:03:03.190 [330/769] Compiling C object lib/librte_power.a.p/power_rte_power_qos.c.o 00:03:03.447 [331/769] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:03.447 [332/769] Linking static target lib/librte_lpm.a 00:03:03.705 [333/769] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:03.705 [334/769] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.705 [335/769] Compiling C object lib/librte_power.a.p/power_rte_power_cpufreq.c.o 00:03:03.705 [336/769] Linking target lib/librte_eventdev.so.25.0 00:03:03.705 [337/769] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:03.705 [338/769] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.964 [339/769] Linking target lib/librte_lpm.so.25.0 00:03:03.964 [340/769] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:03:03.964 [341/769] Linking target lib/librte_dispatcher.so.25.0 00:03:03.964 [342/769] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:03.964 [343/769] Linking static target lib/librte_power.a 00:03:03.964 [344/769] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:03:03.964 [345/769] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:03.964 [346/769] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:03.964 [347/769] Linking static target lib/librte_pcapng.a 00:03:04.223 [348/769] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:04.223 [349/769] Linking static target lib/librte_rawdev.a 00:03:04.223 [350/769] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.223 [351/769] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:04.223 [352/769] Linking target lib/librte_pcapng.so.25.0 00:03:04.482 [353/769] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:04.482 [354/769] Linking static target lib/librte_regexdev.a 00:03:04.482 [355/769] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:03:04.482 [356/769] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:04.482 [357/769] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.789 [358/769] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:04.789 [359/769] Linking target lib/librte_rawdev.so.25.0 00:03:04.789 [360/769] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:04.789 [361/769] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:04.789 [362/769] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.789 [363/769] Linking target lib/librte_power.so.25.0 00:03:04.789 [364/769] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:04.789 [365/769] Linking static target lib/librte_mldev.a 00:03:05.048 [366/769] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:05.048 
[367/769] Linking static target lib/librte_member.a 00:03:05.048 [368/769] Generating symbol file lib/librte_power.so.25.0.p/librte_power.so.25.0.symbols 00:03:05.306 [369/769] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:05.306 [370/769] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.306 [371/769] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.306 [372/769] Linking target lib/librte_regexdev.so.25.0 00:03:05.306 [373/769] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:05.306 [374/769] Linking target lib/librte_member.so.25.0 00:03:05.306 [375/769] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:05.565 [376/769] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:05.565 [377/769] Linking static target lib/librte_reorder.a 00:03:05.565 [378/769] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:05.565 [379/769] Linking static target lib/librte_rib.a 00:03:05.823 [380/769] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.823 [381/769] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:05.823 [382/769] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:05.823 [383/769] Linking target lib/librte_reorder.so.25.0 00:03:05.823 [384/769] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:05.823 [385/769] Linking static target lib/librte_stack.a 00:03:06.081 [386/769] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:06.081 [387/769] Linking static target lib/librte_security.a 00:03:06.081 [388/769] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:03:06.081 [389/769] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.081 [390/769] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:06.081 [391/769] Linking target lib/librte_rib.so.25.0 00:03:06.081 [392/769] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.081 [393/769] Linking target lib/librte_stack.so.25.0 00:03:06.081 [394/769] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:03:06.340 [395/769] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:06.340 [396/769] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.340 [397/769] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.340 [398/769] Linking target lib/librte_mldev.so.25.0 00:03:06.340 [399/769] Linking target lib/librte_security.so.25.0 00:03:06.600 [400/769] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:03:06.600 [401/769] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:06.859 [402/769] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:06.859 [403/769] Linking static target lib/librte_sched.a 00:03:06.859 [404/769] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:07.118 [405/769] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:07.118 [406/769] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.118 [407/769] Linking target lib/librte_sched.so.25.0 00:03:07.376 [408/769] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:03:07.376 
[409/769] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:07.635 [410/769] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:07.635 [411/769] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:07.893 [412/769] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:07.893 [413/769] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:08.151 [414/769] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:08.410 [415/769] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:08.410 [416/769] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:08.410 [417/769] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:08.668 [418/769] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:08.668 [419/769] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:08.668 [420/769] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:08.668 [421/769] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:08.926 [422/769] Linking static target lib/librte_ipsec.a 00:03:09.184 [423/769] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.184 [424/769] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:09.184 [425/769] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:09.184 [426/769] Linking target lib/librte_ipsec.so.25.0 00:03:09.184 [427/769] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:09.184 [428/769] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:03:09.184 [429/769] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:03:09.184 [430/769] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:09.184 [431/769] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:09.184 [432/769] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:09.443 [433/769] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:10.010 [434/769] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:10.010 [435/769] Linking static target lib/librte_pdcp.a 00:03:10.268 [436/769] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:10.268 [437/769] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:10.268 [438/769] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:10.268 [439/769] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:10.268 [440/769] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:10.268 [441/769] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.526 [442/769] Linking target lib/librte_pdcp.so.25.0 00:03:10.785 [443/769] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:10.785 [444/769] Linking static target lib/librte_fib.a 00:03:10.785 [445/769] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:11.044 [446/769] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.044 [447/769] Linking target lib/librte_fib.so.25.0 00:03:11.303 [448/769] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:11.303 [449/769] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:11.303 [450/769] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:11.303 [451/769] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:11.560 [452/769] 
Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:11.560 [453/769] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:11.817 [454/769] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:12.075 [455/769] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:12.075 [456/769] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:12.075 [457/769] Linking static target lib/librte_port.a 00:03:12.075 [458/769] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:12.333 [459/769] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:12.333 [460/769] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:12.333 [461/769] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:12.591 [462/769] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:12.591 [463/769] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:12.591 [464/769] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:12.591 [465/769] Linking static target lib/librte_pdump.a 00:03:12.591 [466/769] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.591 [467/769] Linking target lib/librte_port.so.25.0 00:03:12.849 [468/769] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:12.849 [469/769] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:03:12.849 [470/769] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:12.849 [471/769] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.849 [472/769] Linking target lib/librte_pdump.so.25.0 00:03:13.107 [473/769] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:03:13.366 [474/769] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:13.366 [475/769] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:13.366 [476/769] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:13.625 [477/769] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:13.625 [478/769] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:13.625 [479/769] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:13.884 [480/769] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:13.884 [481/769] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:13.884 [482/769] Linking static target lib/librte_table.a 00:03:14.143 [483/769] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:14.143 [484/769] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:14.710 [485/769] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.710 [486/769] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:14.710 [487/769] Linking target lib/librte_table.so.25.0 00:03:14.710 [488/769] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:03:14.710 [489/769] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:14.969 [490/769] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:15.228 [491/769] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:15.228 [492/769] Compiling C object 
lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:15.486 [493/769] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:15.486 [494/769] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:15.745 [495/769] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:15.745 [496/769] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:15.745 [497/769] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:16.312 [498/769] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:16.312 [499/769] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:16.312 [500/769] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:16.312 [501/769] Linking static target lib/librte_graph.a 00:03:16.312 [502/769] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:16.571 [503/769] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:16.571 [504/769] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:17.138 [505/769] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.138 [506/769] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:17.138 [507/769] Linking target lib/librte_graph.so.25.0 00:03:17.138 [508/769] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:03:17.138 [509/769] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:17.138 [510/769] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:17.396 [511/769] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:17.654 [512/769] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:17.654 [513/769] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:17.654 [514/769] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:17.912 [515/769] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:17.913 [516/769] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:17.913 [517/769] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:18.171 [518/769] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:18.171 [519/769] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:18.430 [520/769] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:18.430 [521/769] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:18.430 [522/769] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:18.689 [523/769] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:18.689 [524/769] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:18.689 [525/769] Linking static target lib/librte_node.a 00:03:18.689 [526/769] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:18.947 [527/769] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.947 [528/769] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:18.947 [529/769] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:18.947 [530/769] Linking target lib/librte_node.so.25.0 00:03:19.206 [531/769] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:19.206 [532/769] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:19.206 [533/769] Generating drivers/rte_bus_vdev.pmd.c with a 
custom command 00:03:19.206 [534/769] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.206 [535/769] Linking static target drivers/librte_bus_vdev.a 00:03:19.206 [536/769] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:19.465 [537/769] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:19.465 [538/769] Linking static target drivers/librte_bus_pci.a 00:03:19.465 [539/769] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.465 [540/769] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:19.465 [541/769] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.465 [542/769] Linking target drivers/librte_bus_vdev.so.25.0 00:03:19.724 [543/769] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:19.724 [544/769] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:19.724 [545/769] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:03:19.724 [546/769] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:19.724 [547/769] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:19.724 [548/769] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:19.983 [549/769] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.983 [550/769] Linking target drivers/librte_bus_pci.so.25.0 00:03:19.983 [551/769] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:19.983 [552/769] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:03:19.983 [553/769] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:19.983 [554/769] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:19.983 [555/769] Linking static target drivers/librte_mempool_ring.a 00:03:19.983 [556/769] Linking target drivers/librte_mempool_ring.so.25.0 00:03:20.242 [557/769] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:20.501 [558/769] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:21.068 [559/769] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:21.068 [560/769] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:21.068 [561/769] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:21.327 [562/769] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:21.892 [563/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:21.893 [564/769] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:21.893 [565/769] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:21.893 [566/769] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:21.893 [567/769] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:22.154 [568/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:22.440 [569/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:22.727 [570/769] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:22.727 [571/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:22.989 [572/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:22.989 [573/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:23.556 [574/769] Compiling C object drivers/libtmp_rte_power_acpi.a.p/power_acpi_acpi_cpufreq.c.o 00:03:23.556 [575/769] Linking static target drivers/libtmp_rte_power_acpi.a 00:03:23.556 [576/769] Compiling C object drivers/libtmp_rte_power_amd_pstate.a.p/power_amd_pstate_amd_pstate_cpufreq.c.o 00:03:23.556 [577/769] Linking static target drivers/libtmp_rte_power_amd_pstate.a 00:03:23.556 [578/769] Generating drivers/rte_power_acpi.pmd.c with a custom command 00:03:23.556 [579/769] Compiling C object drivers/librte_power_acpi.a.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:03:23.556 [580/769] Linking static target drivers/librte_power_acpi.a 00:03:23.814 [581/769] Compiling C object drivers/librte_power_acpi.so.25.0.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:03:23.814 [582/769] Compiling C object drivers/libtmp_rte_power_cppc.a.p/power_cppc_cppc_cpufreq.c.o 00:03:23.814 [583/769] Linking static target drivers/libtmp_rte_power_cppc.a 00:03:23.814 [584/769] Linking target drivers/librte_power_acpi.so.25.0 00:03:23.814 [585/769] Generating drivers/rte_power_amd_pstate.pmd.c with a custom command 00:03:23.814 [586/769] Compiling C object drivers/librte_power_amd_pstate.a.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:03:23.814 [587/769] Linking static target drivers/librte_power_amd_pstate.a 00:03:23.814 [588/769] Compiling C object drivers/librte_power_amd_pstate.so.25.0.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:03:23.814 [589/769] Linking target drivers/librte_power_amd_pstate.so.25.0 00:03:23.814 [590/769] Compiling C object drivers/libtmp_rte_power_intel_pstate.a.p/power_intel_pstate_intel_pstate_cpufreq.c.o 00:03:23.814 [591/769] Linking static target drivers/libtmp_rte_power_intel_pstate.a 00:03:23.814 [592/769] Generating drivers/rte_power_cppc.pmd.c with a custom command 00:03:23.814 [593/769] Compiling C object drivers/librte_power_cppc.a.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:03:23.814 [594/769] Linking static target drivers/librte_power_cppc.a 00:03:23.814 [595/769] Compiling C object drivers/librte_power_cppc.so.25.0.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:03:23.814 [596/769] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_guest_channel.c.o 00:03:24.073 [597/769] Linking target drivers/librte_power_cppc.so.25.0 00:03:24.073 [598/769] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_kvm_vm.c.o 00:03:24.073 [599/769] Linking static target drivers/libtmp_rte_power_kvm_vm.a 00:03:24.073 [600/769] Generating drivers/rte_power_intel_pstate.pmd.c with a custom command 00:03:24.073 [601/769] Compiling C object drivers/librte_power_intel_pstate.a.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:03:24.073 [602/769] Linking static target drivers/librte_power_intel_pstate.a 00:03:24.073 [603/769] Compiling C object drivers/librte_power_intel_pstate.so.25.0.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:03:24.073 [604/769] Linking target drivers/librte_power_intel_pstate.so.25.0 00:03:24.332 [605/769] Generating drivers/rte_power_kvm_vm.pmd.c with a custom command 00:03:24.332 [606/769] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:24.332 [607/769] Compiling C object drivers/librte_power_kvm_vm.a.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:03:24.332 [608/769] Compiling C object drivers/librte_power_kvm_vm.so.25.0.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:03:24.332 [609/769] Linking static target drivers/librte_power_kvm_vm.a 00:03:24.332 [610/769] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:03:24.332 [611/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:24.332 [612/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:24.332 [613/769] Generating drivers/rte_power_kvm_vm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.591 [614/769] Linking target drivers/librte_power_kvm_vm.so.25.0 00:03:24.591 [615/769] Compiling C object drivers/libtmp_rte_power_intel_uncore.a.p/power_intel_uncore_intel_uncore.c.o 00:03:24.591 [616/769] Linking static target drivers/libtmp_rte_power_intel_uncore.a 00:03:24.591 [617/769] Generating drivers/rte_power_intel_uncore.pmd.c with a custom command 00:03:24.591 [618/769] Compiling C object drivers/librte_power_intel_uncore.a.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:03:24.591 [619/769] Linking static target drivers/librte_power_intel_uncore.a 00:03:24.849 [620/769] Compiling C object drivers/librte_power_intel_uncore.so.25.0.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:03:24.849 [621/769] Linking target drivers/librte_power_intel_uncore.so.25.0 00:03:24.849 [622/769] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:24.849 [623/769] Linking static target lib/librte_vhost.a 00:03:24.849 [624/769] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:25.108 [625/769] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:25.108 [626/769] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:25.108 [627/769] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:25.108 [628/769] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:25.366 [629/769] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:25.366 [630/769] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:25.624 [631/769] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:25.624 [632/769] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:25.624 [633/769] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:25.624 [634/769] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:25.625 [635/769] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:25.625 [636/769] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:25.625 [637/769] Linking static target drivers/librte_net_i40e.a 00:03:25.625 [638/769] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:25.883 [639/769] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:25.883 [640/769] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:03:26.142 [641/769] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.142 [642/769] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:26.142 [643/769] Linking target lib/librte_vhost.so.25.0 00:03:26.142 [644/769] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:26.401 [645/769] Generating drivers/rte_net_i40e.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:26.401 [646/769] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:26.401 [647/769] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:26.401 [648/769] Linking target drivers/librte_net_i40e.so.25.0 00:03:26.401 [649/769] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:26.401 [650/769] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:26.401 [651/769] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:26.969 [652/769] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:27.229 [653/769] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:27.229 [654/769] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:27.487 [655/769] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:27.487 [656/769] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:27.487 [657/769] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:27.745 [658/769] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:27.745 [659/769] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:28.003 [660/769] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:28.003 [661/769] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:28.273 [662/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:28.273 [663/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:28.531 [664/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:28.531 [665/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:28.790 [666/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:28.790 [667/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:28.790 [668/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:28.790 [669/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:28.790 [670/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:29.050 [671/769] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:29.050 [672/769] Linking static target lib/librte_pipeline.a 00:03:29.050 [673/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:29.309 [674/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:29.309 [675/769] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:29.309 [676/769] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:29.568 [677/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:29.568 [678/769] Linking target app/dpdk-dumpcap 00:03:29.568 [679/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:29.827 [680/769] Linking target app/dpdk-graph 00:03:29.827 [681/769] Linking target app/dpdk-pdump 00:03:29.827 [682/769] Linking target app/dpdk-proc-info 00:03:29.827 [683/769] 
Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:29.827 [684/769] Linking target app/dpdk-test-acl 00:03:30.086 [685/769] Linking target app/dpdk-test-cmdline 00:03:30.086 [686/769] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:30.086 [687/769] Linking target app/dpdk-test-compress-perf 00:03:30.086 [688/769] Linking target app/dpdk-test-crypto-perf 00:03:30.653 [689/769] Linking target app/dpdk-test-dma-perf 00:03:30.653 [690/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:30.653 [691/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:31.588 [692/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:31.588 [693/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:31.588 [694/769] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:31.846 [695/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:31.846 [696/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:31.846 [697/769] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.846 [698/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:31.846 [699/769] Linking target lib/librte_pipeline.so.25.0 00:03:31.846 [700/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:31.846 [701/769] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:32.105 [702/769] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:32.105 [703/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:32.105 [704/769] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:32.364 [705/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:32.364 [706/769] Linking target app/dpdk-test-fib 00:03:32.623 [707/769] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:32.623 [708/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:32.882 [709/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:32.882 [710/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:32.882 [711/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:32.882 [712/769] Linking target app/dpdk-test-gpudev 00:03:33.140 [713/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:33.140 [714/769] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:33.140 [715/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:33.400 [716/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:33.400 [717/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:33.659 [718/769] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:33.659 [719/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:33.659 [720/769] Linking target app/dpdk-test-flow-perf 00:03:33.659 [721/769] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:33.659 [722/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:33.918 [723/769] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:33.918 [724/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:33.918 [725/769] Linking target app/dpdk-test-bbdev 00:03:34.178 [726/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:34.178 [727/769] Linking target app/dpdk-test-eventdev 00:03:34.178 [728/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:34.178 [729/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:34.178 [730/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:34.746 [731/769] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:34.746 [732/769] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:34.746 [733/769] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:34.746 [734/769] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:35.006 [735/769] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:35.006 [736/769] Linking target app/dpdk-test-mldev 00:03:35.006 [737/769] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:35.264 [738/769] Linking target app/dpdk-test-pipeline 00:03:35.265 [739/769] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:35.523 [740/769] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:35.782 [741/769] Compiling C object app/dpdk-testpmd.p/test-pmd_hairpin.c.o 00:03:35.782 [742/769] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:36.040 [743/769] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:36.040 [744/769] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:36.040 [745/769] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:36.299 [746/769] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:36.299 [747/769] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:36.558 [748/769] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:36.818 [749/769] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:36.818 [750/769] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:36.818 [751/769] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:37.077 [752/769] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:37.336 [753/769] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:37.594 [754/769] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:37.594 [755/769] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:37.853 [756/769] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:37.853 [757/769] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:37.853 [758/769] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:38.112 [759/769] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:38.112 [760/769] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:38.371 [761/769] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:38.371 [762/769] Linking target app/dpdk-test-sad 00:03:38.371 [763/769] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:38.371 [764/769] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:38.371 [765/769] Linking target app/dpdk-test-regex 
00:03:38.630 [766/769] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:38.889 [767/769] Linking target app/dpdk-testpmd 00:03:38.889 [768/769] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:39.456 [769/769] Linking target app/dpdk-test-security-perf 00:03:39.456 16:39:03 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:39.456 16:39:03 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:39.456 16:39:03 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:39.456 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:39.456 [0/1] Installing files. 00:03:40.027 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:40.027 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:40.027 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:40.027 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_skeleton.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:40.028 Installing 
/home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 
00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.028 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 
00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:40.029 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:40.029 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:40.030 
Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.030 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 
00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.031 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:40.031 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:40.032 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:40.032 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_cmdline.a 
to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.032 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 
Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.033 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 
Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:40.603 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:40.603 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:40.603 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:40.603 Installing drivers/librte_power_acpi.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing drivers/librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:40.603 Installing drivers/librte_power_amd_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing drivers/librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:40.603 Installing drivers/librte_power_cppc.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing drivers/librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:40.603 Installing drivers/librte_power_intel_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing drivers/librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:40.603 Installing drivers/librte_power_intel_uncore.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing drivers/librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:40.603 Installing drivers/librte_power_kvm_vm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.603 Installing drivers/librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:40.603 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-crypto-perf 
to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore_var.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing 
/home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing 
/home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_uncore_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_qos.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/drivers/power/kvm_vm/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.606 Installing 
/home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:40.606 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:40.606 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:03:40.606 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:40.606 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:03:40.606 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:40.606 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:03:40.606 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:40.606 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:03:40.606 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:40.606 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:03:40.606 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:40.606 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:03:40.606 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:40.606 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:03:40.606 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:40.606 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:03:40.606 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:40.606 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:03:40.606 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:40.606 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:03:40.606 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:40.606 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:03:40.606 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:40.606 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:03:40.606 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:40.606 Installing symlink pointing to librte_pci.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:03:40.606 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:40.606 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:03:40.606 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:40.606 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:03:40.607 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:40.607 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:03:40.607 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:40.607 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:03:40.607 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:40.607 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:03:40.607 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:40.607 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:03:40.607 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:40.607 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:03:40.607 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:40.607 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:03:40.607 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:40.607 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:03:40.607 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:40.607 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:03:40.607 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:40.607 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:03:40.607 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:40.607 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:03:40.607 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:40.607 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:03:40.607 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:40.607 Installing symlink pointing to librte_efd.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:03:40.607 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:40.607 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:03:40.607 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:40.607 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:03:40.607 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:40.607 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:03:40.607 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:40.607 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:03:40.607 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:40.607 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:03:40.607 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:40.607 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:03:40.607 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:40.607 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:03:40.607 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:40.607 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:03:40.607 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:40.607 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:03:40.607 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:40.607 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:03:40.607 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:40.607 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:03:40.607 Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:40.607 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:03:40.607 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:40.607 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:03:40.607 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:40.607 Installing symlink pointing to librte_regexdev.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:03:40.607 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:40.607 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:03:40.607 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:40.607 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:03:40.607 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:40.607 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:03:40.607 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:40.607 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:03:40.607 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:40.607 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:03:40.607 Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:40.607 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:03:40.607 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:40.607 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:03:40.607 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:40.607 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:03:40.607 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:40.607 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:03:40.607 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:40.607 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:03:40.607 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:40.607 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:03:40.607 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:40.607 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:03:40.607 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:40.607 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:03:40.607 Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:40.607 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:03:40.607 Installing 
symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:40.607 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:03:40.607 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:40.607 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:03:40.607 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:40.607 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:03:40.607 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:03:40.607 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:03:40.607 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:03:40.607 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:03:40.608 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:03:40.608 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:03:40.608 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:03:40.608 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:03:40.608 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:03:40.608 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:03:40.608 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:03:40.608 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:03:40.608 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:03:40.608 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:03:40.608 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:03:40.608 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:03:40.608 './librte_power_acpi.so' -> 'dpdk/pmds-25.0/librte_power_acpi.so' 00:03:40.608 './librte_power_acpi.so.25' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25' 00:03:40.608 './librte_power_acpi.so.25.0' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25.0' 00:03:40.608 './librte_power_amd_pstate.so' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so' 00:03:40.608 './librte_power_amd_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25' 00:03:40.608 './librte_power_amd_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0' 00:03:40.608 './librte_power_cppc.so' -> 'dpdk/pmds-25.0/librte_power_cppc.so' 00:03:40.608 './librte_power_cppc.so.25' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25' 00:03:40.608 './librte_power_cppc.so.25.0' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25.0' 00:03:40.608 './librte_power_intel_pstate.so' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so' 00:03:40.608 './librte_power_intel_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25' 00:03:40.608 './librte_power_intel_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0' 00:03:40.608 './librte_power_intel_uncore.so' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so' 00:03:40.608 './librte_power_intel_uncore.so.25' -> 
'dpdk/pmds-25.0/librte_power_intel_uncore.so.25' 00:03:40.608 './librte_power_intel_uncore.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0' 00:03:40.608 './librte_power_kvm_vm.so' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so' 00:03:40.608 './librte_power_kvm_vm.so.25' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25' 00:03:40.608 './librte_power_kvm_vm.so.25.0' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0' 00:03:40.608 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:03:40.608 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:03:40.608 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:03:40.608 Installing symlink pointing to librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25 00:03:40.608 Installing symlink pointing to librte_power_acpi.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:03:40.608 Installing symlink pointing to librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25 00:03:40.608 Installing symlink pointing to librte_power_amd_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:03:40.608 Installing symlink pointing to librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25 00:03:40.608 Installing symlink pointing to librte_power_cppc.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:03:40.608 Installing symlink pointing to librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25 00:03:40.608 Installing symlink pointing to librte_power_intel_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:03:40.608 Installing symlink pointing to librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25 00:03:40.608 Installing symlink pointing to librte_power_intel_uncore.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:03:40.608 Installing symlink pointing to librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25 00:03:40.608 Installing symlink pointing to librte_power_kvm_vm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:03:40.608 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:03:40.608 16:39:04 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:40.608 16:39:04 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:40.608 00:03:40.608 real 1m4.999s 00:03:40.608 user 7m59.216s 00:03:40.608 sys 1m9.236s 00:03:40.608 16:39:04 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:40.608 16:39:04 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:40.608 ************************************ 00:03:40.608 END TEST build_native_dpdk 00:03:40.608 ************************************ 00:03:40.608 16:39:04 -- spdk/autobuild.sh@31 -- $ case 
"$SPDK_TEST_AUTOBUILD" in 00:03:40.608 16:39:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:40.608 16:39:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:40.608 16:39:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:40.608 16:39:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:40.608 16:39:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:40.608 16:39:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:40.608 16:39:04 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:40.867 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:40.867 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.867 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:40.867 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:41.432 Using 'verbs' RDMA provider 00:03:54.597 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:09.467 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:09.468 Creating mk/config.mk...done. 00:04:09.468 Creating mk/cc.flags.mk...done. 00:04:09.468 Type 'make' to build. 00:04:09.468 16:39:31 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:09.468 16:39:31 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:09.468 16:39:31 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:09.468 16:39:31 -- common/autotest_common.sh@10 -- $ set +x 00:04:09.468 ************************************ 00:04:09.468 START TEST make 00:04:09.468 ************************************ 00:04:09.468 16:39:31 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:09.468 make[1]: Nothing to be done for 'all'. 
00:05:05.713 CC lib/ut/ut.o 00:05:05.713 CC lib/ut_mock/mock.o 00:05:05.713 CC lib/log/log.o 00:05:05.713 CC lib/log/log_flags.o 00:05:05.713 CC lib/log/log_deprecated.o 00:05:05.713 LIB libspdk_ut.a 00:05:05.713 LIB libspdk_log.a 00:05:05.713 LIB libspdk_ut_mock.a 00:05:05.713 SO libspdk_ut.so.2.0 00:05:05.713 SO libspdk_ut_mock.so.6.0 00:05:05.713 SO libspdk_log.so.7.1 00:05:05.713 SYMLINK libspdk_ut.so 00:05:05.713 SYMLINK libspdk_ut_mock.so 00:05:05.713 SYMLINK libspdk_log.so 00:05:05.713 CXX lib/trace_parser/trace.o 00:05:05.713 CC lib/util/base64.o 00:05:05.713 CC lib/util/bit_array.o 00:05:05.713 CC lib/util/cpuset.o 00:05:05.713 CC lib/util/crc16.o 00:05:05.713 CC lib/util/crc32c.o 00:05:05.713 CC lib/util/crc32.o 00:05:05.713 CC lib/dma/dma.o 00:05:05.713 CC lib/ioat/ioat.o 00:05:05.713 CC lib/vfio_user/host/vfio_user_pci.o 00:05:05.713 CC lib/util/crc32_ieee.o 00:05:05.713 CC lib/vfio_user/host/vfio_user.o 00:05:05.713 CC lib/util/crc64.o 00:05:05.713 CC lib/util/dif.o 00:05:05.713 CC lib/util/fd.o 00:05:05.713 CC lib/util/fd_group.o 00:05:05.713 LIB libspdk_dma.a 00:05:05.713 CC lib/util/file.o 00:05:05.713 SO libspdk_dma.so.5.0 00:05:05.713 CC lib/util/hexlify.o 00:05:05.713 LIB libspdk_ioat.a 00:05:05.713 CC lib/util/iov.o 00:05:05.713 SYMLINK libspdk_dma.so 00:05:05.713 CC lib/util/math.o 00:05:05.713 CC lib/util/net.o 00:05:05.713 SO libspdk_ioat.so.7.0 00:05:05.713 LIB libspdk_vfio_user.a 00:05:05.713 SYMLINK libspdk_ioat.so 00:05:05.713 CC lib/util/pipe.o 00:05:05.713 CC lib/util/strerror_tls.o 00:05:05.713 SO libspdk_vfio_user.so.5.0 00:05:05.713 CC lib/util/string.o 00:05:05.713 SYMLINK libspdk_vfio_user.so 00:05:05.713 CC lib/util/uuid.o 00:05:05.713 CC lib/util/xor.o 00:05:05.713 CC lib/util/zipf.o 00:05:05.713 CC lib/util/md5.o 00:05:05.713 LIB libspdk_util.a 00:05:05.713 SO libspdk_util.so.10.1 00:05:05.713 LIB libspdk_trace_parser.a 00:05:05.713 SYMLINK libspdk_util.so 00:05:05.713 SO libspdk_trace_parser.so.6.0 00:05:05.713 SYMLINK libspdk_trace_parser.so 00:05:05.713 CC lib/json/json_parse.o 00:05:05.713 CC lib/json/json_util.o 00:05:05.713 CC lib/json/json_write.o 00:05:05.713 CC lib/env_dpdk/env.o 00:05:05.713 CC lib/conf/conf.o 00:05:05.713 CC lib/env_dpdk/pci.o 00:05:05.713 CC lib/env_dpdk/memory.o 00:05:05.713 CC lib/rdma_utils/rdma_utils.o 00:05:05.713 CC lib/vmd/vmd.o 00:05:05.713 CC lib/idxd/idxd.o 00:05:05.713 LIB libspdk_conf.a 00:05:05.713 CC lib/env_dpdk/init.o 00:05:05.713 CC lib/env_dpdk/threads.o 00:05:05.713 SO libspdk_conf.so.6.0 00:05:05.713 SYMLINK libspdk_conf.so 00:05:05.713 CC lib/env_dpdk/pci_ioat.o 00:05:05.713 LIB libspdk_rdma_utils.a 00:05:05.713 LIB libspdk_json.a 00:05:05.713 CC lib/env_dpdk/pci_virtio.o 00:05:05.713 SO libspdk_rdma_utils.so.1.0 00:05:05.713 SO libspdk_json.so.6.0 00:05:05.713 CC lib/vmd/led.o 00:05:05.713 SYMLINK libspdk_rdma_utils.so 00:05:05.713 CC lib/idxd/idxd_user.o 00:05:05.713 CC lib/env_dpdk/pci_vmd.o 00:05:05.713 SYMLINK libspdk_json.so 00:05:05.713 CC lib/env_dpdk/pci_idxd.o 00:05:05.713 CC lib/env_dpdk/pci_event.o 00:05:05.713 CC lib/idxd/idxd_kernel.o 00:05:05.713 CC lib/env_dpdk/sigbus_handler.o 00:05:05.713 CC lib/env_dpdk/pci_dpdk.o 00:05:05.713 LIB libspdk_vmd.a 00:05:05.713 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:05.713 SO libspdk_vmd.so.6.0 00:05:05.713 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:05.713 LIB libspdk_idxd.a 00:05:05.713 SYMLINK libspdk_vmd.so 00:05:05.713 CC lib/rdma_provider/common.o 00:05:05.713 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:05.713 CC lib/jsonrpc/jsonrpc_server.o 
00:05:05.713 SO libspdk_idxd.so.12.1 00:05:05.713 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:05.713 CC lib/jsonrpc/jsonrpc_client.o 00:05:05.713 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:05.713 SYMLINK libspdk_idxd.so 00:05:05.713 LIB libspdk_rdma_provider.a 00:05:05.713 SO libspdk_rdma_provider.so.7.0 00:05:05.713 LIB libspdk_jsonrpc.a 00:05:05.713 SO libspdk_jsonrpc.so.6.0 00:05:05.713 SYMLINK libspdk_rdma_provider.so 00:05:05.973 SYMLINK libspdk_jsonrpc.so 00:05:05.973 LIB libspdk_env_dpdk.a 00:05:05.973 CC lib/rpc/rpc.o 00:05:06.233 SO libspdk_env_dpdk.so.15.1 00:05:06.233 SYMLINK libspdk_env_dpdk.so 00:05:06.233 LIB libspdk_rpc.a 00:05:06.233 SO libspdk_rpc.so.6.0 00:05:06.491 SYMLINK libspdk_rpc.so 00:05:06.491 CC lib/trace/trace.o 00:05:06.491 CC lib/trace/trace_flags.o 00:05:06.491 CC lib/keyring/keyring.o 00:05:06.491 CC lib/keyring/keyring_rpc.o 00:05:06.491 CC lib/trace/trace_rpc.o 00:05:06.749 CC lib/notify/notify.o 00:05:06.749 CC lib/notify/notify_rpc.o 00:05:06.749 LIB libspdk_notify.a 00:05:06.749 SO libspdk_notify.so.6.0 00:05:06.749 LIB libspdk_keyring.a 00:05:06.749 LIB libspdk_trace.a 00:05:06.749 SYMLINK libspdk_notify.so 00:05:07.008 SO libspdk_keyring.so.2.0 00:05:07.008 SO libspdk_trace.so.11.0 00:05:07.008 SYMLINK libspdk_keyring.so 00:05:07.008 SYMLINK libspdk_trace.so 00:05:07.267 CC lib/thread/thread.o 00:05:07.267 CC lib/thread/iobuf.o 00:05:07.267 CC lib/sock/sock.o 00:05:07.267 CC lib/sock/sock_rpc.o 00:05:07.835 LIB libspdk_sock.a 00:05:07.835 SO libspdk_sock.so.10.0 00:05:07.835 SYMLINK libspdk_sock.so 00:05:08.094 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:08.094 CC lib/nvme/nvme_ctrlr.o 00:05:08.094 CC lib/nvme/nvme_fabric.o 00:05:08.094 CC lib/nvme/nvme_ns.o 00:05:08.094 CC lib/nvme/nvme_pcie_common.o 00:05:08.094 CC lib/nvme/nvme_ns_cmd.o 00:05:08.094 CC lib/nvme/nvme_pcie.o 00:05:08.094 CC lib/nvme/nvme_qpair.o 00:05:08.094 CC lib/nvme/nvme.o 00:05:08.661 LIB libspdk_thread.a 00:05:08.920 SO libspdk_thread.so.11.0 00:05:08.920 CC lib/nvme/nvme_quirks.o 00:05:08.920 SYMLINK libspdk_thread.so 00:05:08.920 CC lib/nvme/nvme_transport.o 00:05:08.920 CC lib/nvme/nvme_discovery.o 00:05:08.920 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:08.920 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:09.179 CC lib/nvme/nvme_tcp.o 00:05:09.179 CC lib/nvme/nvme_opal.o 00:05:09.179 CC lib/nvme/nvme_io_msg.o 00:05:09.179 CC lib/nvme/nvme_poll_group.o 00:05:09.747 CC lib/nvme/nvme_zns.o 00:05:09.747 CC lib/nvme/nvme_stubs.o 00:05:09.747 CC lib/nvme/nvme_auth.o 00:05:09.747 CC lib/nvme/nvme_cuse.o 00:05:10.006 CC lib/nvme/nvme_rdma.o 00:05:10.006 CC lib/accel/accel.o 00:05:10.006 CC lib/blob/blobstore.o 00:05:10.006 CC lib/blob/request.o 00:05:10.266 CC lib/blob/zeroes.o 00:05:10.564 CC lib/init/json_config.o 00:05:10.564 CC lib/virtio/virtio.o 00:05:10.564 CC lib/virtio/virtio_vhost_user.o 00:05:10.564 CC lib/virtio/virtio_vfio_user.o 00:05:10.564 CC lib/virtio/virtio_pci.o 00:05:10.564 CC lib/init/subsystem.o 00:05:10.564 CC lib/accel/accel_rpc.o 00:05:10.849 CC lib/accel/accel_sw.o 00:05:10.849 CC lib/blob/blob_bs_dev.o 00:05:10.849 CC lib/init/subsystem_rpc.o 00:05:10.849 CC lib/init/rpc.o 00:05:10.849 LIB libspdk_virtio.a 00:05:10.849 SO libspdk_virtio.so.7.0 00:05:11.108 LIB libspdk_init.a 00:05:11.108 LIB libspdk_accel.a 00:05:11.108 CC lib/fsdev/fsdev_rpc.o 00:05:11.108 CC lib/fsdev/fsdev_io.o 00:05:11.108 CC lib/fsdev/fsdev.o 00:05:11.108 SYMLINK libspdk_virtio.so 00:05:11.108 SO libspdk_init.so.6.0 00:05:11.108 SO libspdk_accel.so.16.0 00:05:11.108 SYMLINK libspdk_init.so 00:05:11.108 
SYMLINK libspdk_accel.so 00:05:11.367 LIB libspdk_nvme.a 00:05:11.367 CC lib/event/app.o 00:05:11.367 CC lib/event/reactor.o 00:05:11.367 CC lib/event/app_rpc.o 00:05:11.367 CC lib/event/scheduler_static.o 00:05:11.367 CC lib/event/log_rpc.o 00:05:11.367 CC lib/bdev/bdev.o 00:05:11.367 CC lib/bdev/bdev_rpc.o 00:05:11.624 SO libspdk_nvme.so.15.0 00:05:11.624 CC lib/bdev/bdev_zone.o 00:05:11.624 CC lib/bdev/part.o 00:05:11.624 CC lib/bdev/scsi_nvme.o 00:05:11.624 LIB libspdk_fsdev.a 00:05:11.624 SO libspdk_fsdev.so.2.0 00:05:11.882 SYMLINK libspdk_nvme.so 00:05:11.882 LIB libspdk_event.a 00:05:11.882 SYMLINK libspdk_fsdev.so 00:05:11.882 SO libspdk_event.so.14.0 00:05:11.882 SYMLINK libspdk_event.so 00:05:12.139 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:12.706 LIB libspdk_fuse_dispatcher.a 00:05:12.706 SO libspdk_fuse_dispatcher.so.1.0 00:05:12.706 SYMLINK libspdk_fuse_dispatcher.so 00:05:12.965 LIB libspdk_blob.a 00:05:13.224 SO libspdk_blob.so.12.0 00:05:13.224 SYMLINK libspdk_blob.so 00:05:13.482 CC lib/blobfs/blobfs.o 00:05:13.482 CC lib/blobfs/tree.o 00:05:13.482 CC lib/lvol/lvol.o 00:05:14.419 LIB libspdk_blobfs.a 00:05:14.419 LIB libspdk_bdev.a 00:05:14.419 SO libspdk_blobfs.so.11.0 00:05:14.419 SO libspdk_bdev.so.17.0 00:05:14.419 LIB libspdk_lvol.a 00:05:14.419 SYMLINK libspdk_blobfs.so 00:05:14.419 SO libspdk_lvol.so.11.0 00:05:14.419 SYMLINK libspdk_bdev.so 00:05:14.419 SYMLINK libspdk_lvol.so 00:05:14.679 CC lib/scsi/dev.o 00:05:14.679 CC lib/scsi/lun.o 00:05:14.679 CC lib/scsi/scsi.o 00:05:14.679 CC lib/scsi/port.o 00:05:14.679 CC lib/nvmf/ctrlr.o 00:05:14.679 CC lib/scsi/scsi_bdev.o 00:05:14.679 CC lib/scsi/scsi_pr.o 00:05:14.679 CC lib/ublk/ublk.o 00:05:14.679 CC lib/ftl/ftl_core.o 00:05:14.679 CC lib/nbd/nbd.o 00:05:14.679 CC lib/nbd/nbd_rpc.o 00:05:14.938 CC lib/scsi/scsi_rpc.o 00:05:14.938 CC lib/scsi/task.o 00:05:14.938 CC lib/nvmf/ctrlr_discovery.o 00:05:14.938 CC lib/nvmf/ctrlr_bdev.o 00:05:14.938 CC lib/nvmf/subsystem.o 00:05:14.938 LIB libspdk_nbd.a 00:05:15.197 SO libspdk_nbd.so.7.0 00:05:15.197 CC lib/nvmf/nvmf.o 00:05:15.197 CC lib/ftl/ftl_init.o 00:05:15.197 CC lib/ftl/ftl_layout.o 00:05:15.197 SYMLINK libspdk_nbd.so 00:05:15.197 CC lib/ftl/ftl_debug.o 00:05:15.197 LIB libspdk_scsi.a 00:05:15.197 SO libspdk_scsi.so.9.0 00:05:15.455 CC lib/ftl/ftl_io.o 00:05:15.455 SYMLINK libspdk_scsi.so 00:05:15.455 CC lib/ftl/ftl_sb.o 00:05:15.455 CC lib/ublk/ublk_rpc.o 00:05:15.455 CC lib/ftl/ftl_l2p.o 00:05:15.455 CC lib/ftl/ftl_l2p_flat.o 00:05:15.455 CC lib/ftl/ftl_nv_cache.o 00:05:15.455 LIB libspdk_ublk.a 00:05:15.714 CC lib/ftl/ftl_band.o 00:05:15.714 SO libspdk_ublk.so.3.0 00:05:15.714 CC lib/ftl/ftl_band_ops.o 00:05:15.714 CC lib/ftl/ftl_writer.o 00:05:15.714 SYMLINK libspdk_ublk.so 00:05:15.714 CC lib/ftl/ftl_rq.o 00:05:15.973 CC lib/iscsi/conn.o 00:05:15.973 CC lib/vhost/vhost.o 00:05:15.973 CC lib/iscsi/init_grp.o 00:05:15.973 CC lib/iscsi/iscsi.o 00:05:15.973 CC lib/iscsi/param.o 00:05:15.973 CC lib/vhost/vhost_rpc.o 00:05:15.973 CC lib/nvmf/nvmf_rpc.o 00:05:16.232 CC lib/nvmf/transport.o 00:05:16.232 CC lib/nvmf/tcp.o 00:05:16.490 CC lib/vhost/vhost_scsi.o 00:05:16.490 CC lib/iscsi/portal_grp.o 00:05:16.490 CC lib/ftl/ftl_reloc.o 00:05:16.490 CC lib/nvmf/stubs.o 00:05:16.748 CC lib/nvmf/mdns_server.o 00:05:16.748 CC lib/ftl/ftl_l2p_cache.o 00:05:17.006 CC lib/ftl/ftl_p2l.o 00:05:17.006 CC lib/ftl/ftl_p2l_log.o 00:05:17.006 CC lib/ftl/mngt/ftl_mngt.o 00:05:17.006 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:17.006 CC lib/nvmf/rdma.o 00:05:17.264 CC lib/iscsi/tgt_node.o 
00:05:17.264 CC lib/iscsi/iscsi_subsystem.o 00:05:17.264 CC lib/vhost/vhost_blk.o 00:05:17.264 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:17.264 CC lib/iscsi/iscsi_rpc.o 00:05:17.264 CC lib/iscsi/task.o 00:05:17.522 CC lib/vhost/rte_vhost_user.o 00:05:17.522 CC lib/nvmf/auth.o 00:05:17.522 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:17.522 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:17.779 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:17.779 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:17.779 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:17.779 LIB libspdk_iscsi.a 00:05:17.779 SO libspdk_iscsi.so.8.0 00:05:18.037 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:18.037 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:18.037 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:18.037 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:18.037 SYMLINK libspdk_iscsi.so 00:05:18.037 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:18.037 CC lib/ftl/utils/ftl_conf.o 00:05:18.295 CC lib/ftl/utils/ftl_md.o 00:05:18.295 CC lib/ftl/utils/ftl_mempool.o 00:05:18.295 CC lib/ftl/utils/ftl_bitmap.o 00:05:18.295 CC lib/ftl/utils/ftl_property.o 00:05:18.295 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:18.295 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:18.295 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:18.295 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:18.295 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:18.295 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:18.554 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:18.554 LIB libspdk_vhost.a 00:05:18.554 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:18.554 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:18.554 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:18.554 SO libspdk_vhost.so.8.0 00:05:18.554 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:18.554 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:18.554 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:18.554 CC lib/ftl/base/ftl_base_dev.o 00:05:18.812 SYMLINK libspdk_vhost.so 00:05:18.812 CC lib/ftl/base/ftl_base_bdev.o 00:05:18.813 CC lib/ftl/ftl_trace.o 00:05:19.071 LIB libspdk_ftl.a 00:05:19.329 LIB libspdk_nvmf.a 00:05:19.329 SO libspdk_ftl.so.9.0 00:05:19.329 SO libspdk_nvmf.so.20.0 00:05:19.587 SYMLINK libspdk_ftl.so 00:05:19.587 SYMLINK libspdk_nvmf.so 00:05:19.846 CC module/env_dpdk/env_dpdk_rpc.o 00:05:19.846 CC module/accel/ioat/accel_ioat.o 00:05:19.846 CC module/keyring/linux/keyring.o 00:05:19.846 CC module/accel/error/accel_error.o 00:05:19.846 CC module/fsdev/aio/fsdev_aio.o 00:05:19.846 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:19.846 CC module/keyring/file/keyring.o 00:05:19.846 CC module/sock/posix/posix.o 00:05:19.846 CC module/blob/bdev/blob_bdev.o 00:05:19.846 CC module/sock/uring/uring.o 00:05:20.105 LIB libspdk_env_dpdk_rpc.a 00:05:20.105 SO libspdk_env_dpdk_rpc.so.6.0 00:05:20.105 SYMLINK libspdk_env_dpdk_rpc.so 00:05:20.105 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:20.105 CC module/keyring/linux/keyring_rpc.o 00:05:20.105 CC module/keyring/file/keyring_rpc.o 00:05:20.105 CC module/accel/ioat/accel_ioat_rpc.o 00:05:20.105 CC module/accel/error/accel_error_rpc.o 00:05:20.105 LIB libspdk_scheduler_dynamic.a 00:05:20.105 SO libspdk_scheduler_dynamic.so.4.0 00:05:20.364 LIB libspdk_blob_bdev.a 00:05:20.364 LIB libspdk_keyring_linux.a 00:05:20.364 SYMLINK libspdk_scheduler_dynamic.so 00:05:20.364 CC module/fsdev/aio/linux_aio_mgr.o 00:05:20.364 LIB libspdk_keyring_file.a 00:05:20.364 SO libspdk_blob_bdev.so.12.0 00:05:20.364 SO libspdk_keyring_linux.so.1.0 00:05:20.364 SO libspdk_keyring_file.so.2.0 00:05:20.364 LIB libspdk_accel_ioat.a 00:05:20.364 LIB libspdk_accel_error.a 00:05:20.364 SYMLINK libspdk_blob_bdev.so 
00:05:20.364 SYMLINK libspdk_keyring_linux.so 00:05:20.364 SYMLINK libspdk_keyring_file.so 00:05:20.364 SO libspdk_accel_ioat.so.6.0 00:05:20.364 SO libspdk_accel_error.so.2.0 00:05:20.364 SYMLINK libspdk_accel_ioat.so 00:05:20.364 SYMLINK libspdk_accel_error.so 00:05:20.364 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:20.623 CC module/accel/dsa/accel_dsa.o 00:05:20.623 CC module/accel/iaa/accel_iaa.o 00:05:20.623 CC module/scheduler/gscheduler/gscheduler.o 00:05:20.623 CC module/blobfs/bdev/blobfs_bdev.o 00:05:20.623 CC module/bdev/delay/vbdev_delay.o 00:05:20.623 LIB libspdk_scheduler_dpdk_governor.a 00:05:20.623 LIB libspdk_fsdev_aio.a 00:05:20.623 CC module/bdev/error/vbdev_error.o 00:05:20.623 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:20.623 SO libspdk_fsdev_aio.so.1.0 00:05:20.623 LIB libspdk_sock_uring.a 00:05:20.623 LIB libspdk_sock_posix.a 00:05:20.623 SO libspdk_sock_uring.so.5.0 00:05:20.623 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:20.623 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:20.623 SO libspdk_sock_posix.so.6.0 00:05:20.882 SYMLINK libspdk_fsdev_aio.so 00:05:20.882 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:20.882 LIB libspdk_scheduler_gscheduler.a 00:05:20.882 SYMLINK libspdk_sock_uring.so 00:05:20.882 CC module/bdev/error/vbdev_error_rpc.o 00:05:20.882 SO libspdk_scheduler_gscheduler.so.4.0 00:05:20.882 SYMLINK libspdk_sock_posix.so 00:05:20.882 CC module/accel/iaa/accel_iaa_rpc.o 00:05:20.882 SYMLINK libspdk_scheduler_gscheduler.so 00:05:20.882 CC module/accel/dsa/accel_dsa_rpc.o 00:05:20.882 LIB libspdk_blobfs_bdev.a 00:05:20.882 LIB libspdk_bdev_error.a 00:05:20.882 LIB libspdk_accel_iaa.a 00:05:20.882 CC module/bdev/gpt/gpt.o 00:05:20.882 SO libspdk_blobfs_bdev.so.6.0 00:05:20.882 LIB libspdk_bdev_delay.a 00:05:20.882 CC module/bdev/lvol/vbdev_lvol.o 00:05:20.882 SO libspdk_accel_iaa.so.3.0 00:05:20.882 SO libspdk_bdev_error.so.6.0 00:05:21.141 LIB libspdk_accel_dsa.a 00:05:21.141 SO libspdk_bdev_delay.so.6.0 00:05:21.141 CC module/bdev/malloc/bdev_malloc.o 00:05:21.141 SYMLINK libspdk_blobfs_bdev.so 00:05:21.141 SO libspdk_accel_dsa.so.5.0 00:05:21.141 CC module/bdev/gpt/vbdev_gpt.o 00:05:21.141 SYMLINK libspdk_accel_iaa.so 00:05:21.141 SYMLINK libspdk_bdev_error.so 00:05:21.141 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:21.141 CC module/bdev/null/bdev_null.o 00:05:21.141 SYMLINK libspdk_bdev_delay.so 00:05:21.141 CC module/bdev/null/bdev_null_rpc.o 00:05:21.141 CC module/bdev/nvme/bdev_nvme.o 00:05:21.141 SYMLINK libspdk_accel_dsa.so 00:05:21.141 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:21.141 CC module/bdev/nvme/nvme_rpc.o 00:05:21.141 CC module/bdev/passthru/vbdev_passthru.o 00:05:21.141 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:21.399 CC module/bdev/nvme/bdev_mdns_client.o 00:05:21.399 LIB libspdk_bdev_gpt.a 00:05:21.399 LIB libspdk_bdev_null.a 00:05:21.399 SO libspdk_bdev_gpt.so.6.0 00:05:21.399 SO libspdk_bdev_null.so.6.0 00:05:21.399 CC module/bdev/nvme/vbdev_opal.o 00:05:21.399 LIB libspdk_bdev_malloc.a 00:05:21.399 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:21.399 SYMLINK libspdk_bdev_gpt.so 00:05:21.399 SYMLINK libspdk_bdev_null.so 00:05:21.399 SO libspdk_bdev_malloc.so.6.0 00:05:21.399 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:21.657 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:21.657 LIB libspdk_bdev_passthru.a 00:05:21.657 SYMLINK libspdk_bdev_malloc.so 00:05:21.657 SO libspdk_bdev_passthru.so.6.0 00:05:21.657 SYMLINK libspdk_bdev_passthru.so 00:05:21.657 CC module/bdev/split/vbdev_split.o 00:05:21.657 CC 
module/bdev/split/vbdev_split_rpc.o 00:05:21.657 CC module/bdev/raid/bdev_raid.o 00:05:21.657 CC module/bdev/raid/bdev_raid_rpc.o 00:05:21.657 CC module/bdev/raid/bdev_raid_sb.o 00:05:21.915 CC module/bdev/raid/raid0.o 00:05:21.915 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:21.915 CC module/bdev/uring/bdev_uring.o 00:05:21.915 CC module/bdev/uring/bdev_uring_rpc.o 00:05:21.915 LIB libspdk_bdev_lvol.a 00:05:21.915 LIB libspdk_bdev_split.a 00:05:21.915 SO libspdk_bdev_lvol.so.6.0 00:05:21.915 SO libspdk_bdev_split.so.6.0 00:05:21.915 CC module/bdev/raid/raid1.o 00:05:21.915 CC module/bdev/raid/concat.o 00:05:22.173 SYMLINK libspdk_bdev_lvol.so 00:05:22.173 SYMLINK libspdk_bdev_split.so 00:05:22.173 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:22.173 CC module/bdev/aio/bdev_aio.o 00:05:22.173 CC module/bdev/ftl/bdev_ftl.o 00:05:22.173 LIB libspdk_bdev_zone_block.a 00:05:22.173 LIB libspdk_bdev_uring.a 00:05:22.173 CC module/bdev/iscsi/bdev_iscsi.o 00:05:22.173 SO libspdk_bdev_zone_block.so.6.0 00:05:22.173 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:22.431 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:22.431 SO libspdk_bdev_uring.so.6.0 00:05:22.431 SYMLINK libspdk_bdev_zone_block.so 00:05:22.431 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:22.431 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:22.431 SYMLINK libspdk_bdev_uring.so 00:05:22.431 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:22.431 CC module/bdev/aio/bdev_aio_rpc.o 00:05:22.431 LIB libspdk_bdev_ftl.a 00:05:22.689 SO libspdk_bdev_ftl.so.6.0 00:05:22.689 SYMLINK libspdk_bdev_ftl.so 00:05:22.689 LIB libspdk_bdev_aio.a 00:05:22.689 LIB libspdk_bdev_iscsi.a 00:05:22.689 SO libspdk_bdev_aio.so.6.0 00:05:22.689 SO libspdk_bdev_iscsi.so.6.0 00:05:22.689 SYMLINK libspdk_bdev_aio.so 00:05:22.689 LIB libspdk_bdev_raid.a 00:05:22.689 SYMLINK libspdk_bdev_iscsi.so 00:05:22.689 SO libspdk_bdev_raid.so.6.0 00:05:22.948 LIB libspdk_bdev_virtio.a 00:05:22.948 SYMLINK libspdk_bdev_raid.so 00:05:22.948 SO libspdk_bdev_virtio.so.6.0 00:05:22.948 SYMLINK libspdk_bdev_virtio.so 00:05:23.923 LIB libspdk_bdev_nvme.a 00:05:23.923 SO libspdk_bdev_nvme.so.7.1 00:05:23.923 SYMLINK libspdk_bdev_nvme.so 00:05:24.197 CC module/event/subsystems/iobuf/iobuf.o 00:05:24.197 CC module/event/subsystems/vmd/vmd.o 00:05:24.197 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:24.197 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:24.198 CC module/event/subsystems/fsdev/fsdev.o 00:05:24.198 CC module/event/subsystems/keyring/keyring.o 00:05:24.198 CC module/event/subsystems/sock/sock.o 00:05:24.198 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:24.198 CC module/event/subsystems/scheduler/scheduler.o 00:05:24.456 LIB libspdk_event_fsdev.a 00:05:24.456 LIB libspdk_event_scheduler.a 00:05:24.456 LIB libspdk_event_keyring.a 00:05:24.456 LIB libspdk_event_vhost_blk.a 00:05:24.456 LIB libspdk_event_vmd.a 00:05:24.456 LIB libspdk_event_sock.a 00:05:24.456 SO libspdk_event_fsdev.so.1.0 00:05:24.456 SO libspdk_event_scheduler.so.4.0 00:05:24.456 SO libspdk_event_vhost_blk.so.3.0 00:05:24.456 SO libspdk_event_keyring.so.1.0 00:05:24.456 LIB libspdk_event_iobuf.a 00:05:24.456 SO libspdk_event_vmd.so.6.0 00:05:24.456 SO libspdk_event_sock.so.5.0 00:05:24.456 SO libspdk_event_iobuf.so.3.0 00:05:24.456 SYMLINK libspdk_event_scheduler.so 00:05:24.456 SYMLINK libspdk_event_vhost_blk.so 00:05:24.456 SYMLINK libspdk_event_fsdev.so 00:05:24.456 SYMLINK libspdk_event_keyring.so 00:05:24.456 SYMLINK libspdk_event_vmd.so 00:05:24.456 SYMLINK libspdk_event_sock.so 
00:05:24.715 SYMLINK libspdk_event_iobuf.so 00:05:24.973 CC module/event/subsystems/accel/accel.o 00:05:24.973 LIB libspdk_event_accel.a 00:05:24.973 SO libspdk_event_accel.so.6.0 00:05:25.232 SYMLINK libspdk_event_accel.so 00:05:25.492 CC module/event/subsystems/bdev/bdev.o 00:05:25.492 LIB libspdk_event_bdev.a 00:05:25.751 SO libspdk_event_bdev.so.6.0 00:05:25.751 SYMLINK libspdk_event_bdev.so 00:05:26.011 CC module/event/subsystems/scsi/scsi.o 00:05:26.011 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:26.011 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:26.011 CC module/event/subsystems/nbd/nbd.o 00:05:26.011 CC module/event/subsystems/ublk/ublk.o 00:05:26.011 LIB libspdk_event_ublk.a 00:05:26.011 LIB libspdk_event_scsi.a 00:05:26.011 LIB libspdk_event_nbd.a 00:05:26.270 SO libspdk_event_ublk.so.3.0 00:05:26.270 SO libspdk_event_scsi.so.6.0 00:05:26.270 SO libspdk_event_nbd.so.6.0 00:05:26.270 SYMLINK libspdk_event_ublk.so 00:05:26.270 SYMLINK libspdk_event_scsi.so 00:05:26.270 SYMLINK libspdk_event_nbd.so 00:05:26.270 LIB libspdk_event_nvmf.a 00:05:26.270 SO libspdk_event_nvmf.so.6.0 00:05:26.270 SYMLINK libspdk_event_nvmf.so 00:05:26.529 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:26.529 CC module/event/subsystems/iscsi/iscsi.o 00:05:26.529 LIB libspdk_event_vhost_scsi.a 00:05:26.529 LIB libspdk_event_iscsi.a 00:05:26.788 SO libspdk_event_vhost_scsi.so.3.0 00:05:26.788 SO libspdk_event_iscsi.so.6.0 00:05:26.788 SYMLINK libspdk_event_vhost_scsi.so 00:05:26.788 SYMLINK libspdk_event_iscsi.so 00:05:26.788 SO libspdk.so.6.0 00:05:26.788 SYMLINK libspdk.so 00:05:27.047 TEST_HEADER include/spdk/accel.h 00:05:27.047 CC test/rpc_client/rpc_client_test.o 00:05:27.047 CXX app/trace/trace.o 00:05:27.047 TEST_HEADER include/spdk/accel_module.h 00:05:27.047 TEST_HEADER include/spdk/assert.h 00:05:27.047 TEST_HEADER include/spdk/barrier.h 00:05:27.047 TEST_HEADER include/spdk/base64.h 00:05:27.047 TEST_HEADER include/spdk/bdev.h 00:05:27.306 TEST_HEADER include/spdk/bdev_module.h 00:05:27.306 TEST_HEADER include/spdk/bdev_zone.h 00:05:27.306 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:27.306 TEST_HEADER include/spdk/bit_array.h 00:05:27.306 TEST_HEADER include/spdk/bit_pool.h 00:05:27.306 TEST_HEADER include/spdk/blob_bdev.h 00:05:27.306 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:27.306 TEST_HEADER include/spdk/blobfs.h 00:05:27.306 TEST_HEADER include/spdk/blob.h 00:05:27.306 TEST_HEADER include/spdk/conf.h 00:05:27.306 TEST_HEADER include/spdk/config.h 00:05:27.306 TEST_HEADER include/spdk/cpuset.h 00:05:27.306 TEST_HEADER include/spdk/crc16.h 00:05:27.306 TEST_HEADER include/spdk/crc32.h 00:05:27.306 TEST_HEADER include/spdk/crc64.h 00:05:27.306 TEST_HEADER include/spdk/dif.h 00:05:27.306 TEST_HEADER include/spdk/dma.h 00:05:27.306 TEST_HEADER include/spdk/endian.h 00:05:27.306 TEST_HEADER include/spdk/env_dpdk.h 00:05:27.306 TEST_HEADER include/spdk/env.h 00:05:27.306 TEST_HEADER include/spdk/event.h 00:05:27.306 TEST_HEADER include/spdk/fd_group.h 00:05:27.306 TEST_HEADER include/spdk/fd.h 00:05:27.306 TEST_HEADER include/spdk/file.h 00:05:27.306 TEST_HEADER include/spdk/fsdev.h 00:05:27.306 TEST_HEADER include/spdk/fsdev_module.h 00:05:27.306 TEST_HEADER include/spdk/ftl.h 00:05:27.306 CC test/thread/poller_perf/poller_perf.o 00:05:27.306 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:27.306 TEST_HEADER include/spdk/gpt_spec.h 00:05:27.306 TEST_HEADER include/spdk/hexlify.h 00:05:27.306 CC examples/util/zipf/zipf.o 00:05:27.306 TEST_HEADER 
include/spdk/histogram_data.h 00:05:27.306 TEST_HEADER include/spdk/idxd.h 00:05:27.306 TEST_HEADER include/spdk/idxd_spec.h 00:05:27.306 CC examples/ioat/perf/perf.o 00:05:27.306 TEST_HEADER include/spdk/init.h 00:05:27.306 TEST_HEADER include/spdk/ioat.h 00:05:27.306 TEST_HEADER include/spdk/ioat_spec.h 00:05:27.306 TEST_HEADER include/spdk/iscsi_spec.h 00:05:27.306 TEST_HEADER include/spdk/json.h 00:05:27.306 TEST_HEADER include/spdk/jsonrpc.h 00:05:27.306 TEST_HEADER include/spdk/keyring.h 00:05:27.306 TEST_HEADER include/spdk/keyring_module.h 00:05:27.306 TEST_HEADER include/spdk/likely.h 00:05:27.306 TEST_HEADER include/spdk/log.h 00:05:27.306 CC test/app/bdev_svc/bdev_svc.o 00:05:27.306 TEST_HEADER include/spdk/lvol.h 00:05:27.306 CC test/dma/test_dma/test_dma.o 00:05:27.306 TEST_HEADER include/spdk/md5.h 00:05:27.306 TEST_HEADER include/spdk/memory.h 00:05:27.306 TEST_HEADER include/spdk/mmio.h 00:05:27.306 TEST_HEADER include/spdk/nbd.h 00:05:27.306 TEST_HEADER include/spdk/net.h 00:05:27.306 TEST_HEADER include/spdk/notify.h 00:05:27.306 TEST_HEADER include/spdk/nvme.h 00:05:27.306 TEST_HEADER include/spdk/nvme_intel.h 00:05:27.306 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:27.306 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:27.306 TEST_HEADER include/spdk/nvme_spec.h 00:05:27.306 TEST_HEADER include/spdk/nvme_zns.h 00:05:27.306 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:27.306 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:27.306 TEST_HEADER include/spdk/nvmf.h 00:05:27.306 TEST_HEADER include/spdk/nvmf_spec.h 00:05:27.306 TEST_HEADER include/spdk/nvmf_transport.h 00:05:27.306 TEST_HEADER include/spdk/opal.h 00:05:27.306 TEST_HEADER include/spdk/opal_spec.h 00:05:27.306 TEST_HEADER include/spdk/pci_ids.h 00:05:27.306 TEST_HEADER include/spdk/pipe.h 00:05:27.306 TEST_HEADER include/spdk/queue.h 00:05:27.306 TEST_HEADER include/spdk/reduce.h 00:05:27.306 TEST_HEADER include/spdk/rpc.h 00:05:27.306 TEST_HEADER include/spdk/scheduler.h 00:05:27.306 CC test/env/mem_callbacks/mem_callbacks.o 00:05:27.306 TEST_HEADER include/spdk/scsi.h 00:05:27.306 TEST_HEADER include/spdk/scsi_spec.h 00:05:27.306 TEST_HEADER include/spdk/sock.h 00:05:27.306 TEST_HEADER include/spdk/stdinc.h 00:05:27.306 TEST_HEADER include/spdk/string.h 00:05:27.306 TEST_HEADER include/spdk/thread.h 00:05:27.306 TEST_HEADER include/spdk/trace.h 00:05:27.306 TEST_HEADER include/spdk/trace_parser.h 00:05:27.306 TEST_HEADER include/spdk/tree.h 00:05:27.306 TEST_HEADER include/spdk/ublk.h 00:05:27.306 TEST_HEADER include/spdk/util.h 00:05:27.306 TEST_HEADER include/spdk/uuid.h 00:05:27.306 TEST_HEADER include/spdk/version.h 00:05:27.306 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:27.306 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:27.306 TEST_HEADER include/spdk/vhost.h 00:05:27.306 TEST_HEADER include/spdk/vmd.h 00:05:27.306 TEST_HEADER include/spdk/xor.h 00:05:27.306 LINK rpc_client_test 00:05:27.306 TEST_HEADER include/spdk/zipf.h 00:05:27.306 CXX test/cpp_headers/accel.o 00:05:27.565 LINK interrupt_tgt 00:05:27.565 LINK poller_perf 00:05:27.565 LINK zipf 00:05:27.565 LINK bdev_svc 00:05:27.565 CXX test/cpp_headers/accel_module.o 00:05:27.565 LINK ioat_perf 00:05:27.565 CXX test/cpp_headers/assert.o 00:05:27.565 CXX test/cpp_headers/barrier.o 00:05:27.565 LINK spdk_trace 00:05:27.565 CXX test/cpp_headers/base64.o 00:05:27.565 CXX test/cpp_headers/bdev.o 00:05:27.824 CC examples/ioat/verify/verify.o 00:05:27.824 LINK test_dma 00:05:27.824 CC test/app/histogram_perf/histogram_perf.o 00:05:27.824 CXX 
test/cpp_headers/bdev_module.o 00:05:27.824 CC app/trace_record/trace_record.o 00:05:27.824 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:27.824 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:27.824 CC app/nvmf_tgt/nvmf_main.o 00:05:28.084 CC examples/thread/thread/thread_ex.o 00:05:28.084 LINK histogram_perf 00:05:28.084 LINK mem_callbacks 00:05:28.084 LINK verify 00:05:28.084 CXX test/cpp_headers/bdev_zone.o 00:05:28.084 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:28.084 LINK nvmf_tgt 00:05:28.084 LINK spdk_trace_record 00:05:28.084 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:28.343 CC test/env/vtophys/vtophys.o 00:05:28.343 LINK thread 00:05:28.343 CXX test/cpp_headers/bit_array.o 00:05:28.343 LINK nvme_fuzz 00:05:28.343 CC test/event/event_perf/event_perf.o 00:05:28.343 CC test/event/reactor_perf/reactor_perf.o 00:05:28.343 CC test/event/reactor/reactor.o 00:05:28.343 LINK vtophys 00:05:28.343 CXX test/cpp_headers/bit_pool.o 00:05:28.602 CC app/iscsi_tgt/iscsi_tgt.o 00:05:28.602 LINK event_perf 00:05:28.602 LINK reactor_perf 00:05:28.602 LINK reactor 00:05:28.602 LINK vhost_fuzz 00:05:28.602 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:28.602 CXX test/cpp_headers/blob_bdev.o 00:05:28.602 CC examples/sock/hello_world/hello_sock.o 00:05:28.602 CC app/spdk_tgt/spdk_tgt.o 00:05:28.602 LINK iscsi_tgt 00:05:28.862 CC app/spdk_lspci/spdk_lspci.o 00:05:28.862 CC app/spdk_nvme_perf/perf.o 00:05:28.862 CC test/event/app_repeat/app_repeat.o 00:05:28.862 LINK env_dpdk_post_init 00:05:28.862 CXX test/cpp_headers/blobfs_bdev.o 00:05:28.862 CXX test/cpp_headers/blobfs.o 00:05:28.862 CC test/event/scheduler/scheduler.o 00:05:28.862 LINK spdk_lspci 00:05:28.862 LINK hello_sock 00:05:28.862 LINK spdk_tgt 00:05:29.122 LINK app_repeat 00:05:29.122 CXX test/cpp_headers/blob.o 00:05:29.122 CC test/env/memory/memory_ut.o 00:05:29.122 CC test/env/pci/pci_ut.o 00:05:29.122 CC test/app/jsoncat/jsoncat.o 00:05:29.122 CXX test/cpp_headers/conf.o 00:05:29.122 LINK scheduler 00:05:29.122 CC examples/vmd/lsvmd/lsvmd.o 00:05:29.381 CC app/spdk_nvme_identify/identify.o 00:05:29.381 LINK jsoncat 00:05:29.381 CXX test/cpp_headers/config.o 00:05:29.381 CXX test/cpp_headers/cpuset.o 00:05:29.381 LINK lsvmd 00:05:29.381 CC test/app/stub/stub.o 00:05:29.381 CC app/spdk_nvme_discover/discovery_aer.o 00:05:29.381 CXX test/cpp_headers/crc16.o 00:05:29.640 LINK pci_ut 00:05:29.640 LINK iscsi_fuzz 00:05:29.640 LINK stub 00:05:29.640 CC examples/vmd/led/led.o 00:05:29.640 CC app/spdk_top/spdk_top.o 00:05:29.640 CXX test/cpp_headers/crc32.o 00:05:29.640 LINK spdk_nvme_perf 00:05:29.640 LINK spdk_nvme_discover 00:05:29.898 CXX test/cpp_headers/crc64.o 00:05:29.898 LINK led 00:05:29.898 CXX test/cpp_headers/dif.o 00:05:29.898 CXX test/cpp_headers/dma.o 00:05:29.898 CC app/spdk_dd/spdk_dd.o 00:05:29.898 CXX test/cpp_headers/endian.o 00:05:29.898 CC app/vhost/vhost.o 00:05:30.157 CC test/nvme/aer/aer.o 00:05:30.157 CC test/nvme/reset/reset.o 00:05:30.157 LINK spdk_nvme_identify 00:05:30.157 CC examples/idxd/perf/perf.o 00:05:30.157 CXX test/cpp_headers/env_dpdk.o 00:05:30.157 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:30.157 LINK vhost 00:05:30.157 CXX test/cpp_headers/env.o 00:05:30.415 LINK reset 00:05:30.415 LINK aer 00:05:30.415 LINK memory_ut 00:05:30.415 CXX test/cpp_headers/event.o 00:05:30.415 CC test/nvme/sgl/sgl.o 00:05:30.416 LINK spdk_dd 00:05:30.416 CXX test/cpp_headers/fd_group.o 00:05:30.416 CC test/nvme/e2edp/nvme_dp.o 00:05:30.416 LINK idxd_perf 00:05:30.416 LINK spdk_top 00:05:30.416 LINK 
hello_fsdev 00:05:30.675 CC test/nvme/overhead/overhead.o 00:05:30.675 CXX test/cpp_headers/fd.o 00:05:30.675 CC test/nvme/err_injection/err_injection.o 00:05:30.675 CXX test/cpp_headers/file.o 00:05:30.675 LINK sgl 00:05:30.675 CC test/nvme/startup/startup.o 00:05:30.675 CC examples/accel/perf/accel_perf.o 00:05:30.933 CC app/fio/nvme/fio_plugin.o 00:05:30.933 LINK nvme_dp 00:05:30.933 CXX test/cpp_headers/fsdev.o 00:05:30.933 LINK err_injection 00:05:30.933 CC app/fio/bdev/fio_plugin.o 00:05:30.933 LINK overhead 00:05:30.933 CC test/nvme/reserve/reserve.o 00:05:30.933 CC test/nvme/simple_copy/simple_copy.o 00:05:30.933 LINK startup 00:05:30.933 CXX test/cpp_headers/fsdev_module.o 00:05:30.933 CXX test/cpp_headers/ftl.o 00:05:31.198 CC test/nvme/connect_stress/connect_stress.o 00:05:31.198 CC test/nvme/boot_partition/boot_partition.o 00:05:31.198 LINK reserve 00:05:31.198 LINK simple_copy 00:05:31.198 CC test/nvme/compliance/nvme_compliance.o 00:05:31.198 CXX test/cpp_headers/fuse_dispatcher.o 00:05:31.198 LINK connect_stress 00:05:31.198 LINK accel_perf 00:05:31.198 LINK boot_partition 00:05:31.198 CC test/nvme/fused_ordering/fused_ordering.o 00:05:31.460 LINK spdk_bdev 00:05:31.460 LINK spdk_nvme 00:05:31.460 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:31.460 CXX test/cpp_headers/gpt_spec.o 00:05:31.460 CC test/nvme/fdp/fdp.o 00:05:31.460 CC test/nvme/cuse/cuse.o 00:05:31.460 LINK fused_ordering 00:05:31.718 LINK doorbell_aers 00:05:31.718 LINK nvme_compliance 00:05:31.718 CXX test/cpp_headers/hexlify.o 00:05:31.718 CC examples/nvme/hello_world/hello_world.o 00:05:31.718 CC examples/blob/hello_world/hello_blob.o 00:05:31.718 CC test/accel/dif/dif.o 00:05:31.718 CC examples/bdev/hello_world/hello_bdev.o 00:05:31.718 CXX test/cpp_headers/histogram_data.o 00:05:31.718 LINK fdp 00:05:31.718 CC examples/bdev/bdevperf/bdevperf.o 00:05:31.977 LINK hello_world 00:05:31.977 LINK hello_blob 00:05:31.977 CXX test/cpp_headers/idxd.o 00:05:31.977 CC test/blobfs/mkfs/mkfs.o 00:05:31.977 LINK hello_bdev 00:05:31.977 CC test/lvol/esnap/esnap.o 00:05:31.977 CXX test/cpp_headers/idxd_spec.o 00:05:31.977 CC examples/nvme/reconnect/reconnect.o 00:05:32.235 CXX test/cpp_headers/init.o 00:05:32.235 LINK mkfs 00:05:32.235 CXX test/cpp_headers/ioat.o 00:05:32.235 CC examples/blob/cli/blobcli.o 00:05:32.235 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:32.235 LINK dif 00:05:32.494 CXX test/cpp_headers/ioat_spec.o 00:05:32.494 CXX test/cpp_headers/iscsi_spec.o 00:05:32.494 LINK reconnect 00:05:32.494 CC examples/nvme/arbitration/arbitration.o 00:05:32.494 CXX test/cpp_headers/json.o 00:05:32.753 CC examples/nvme/hotplug/hotplug.o 00:05:32.753 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:32.753 LINK bdevperf 00:05:32.753 CC examples/nvme/abort/abort.o 00:05:32.753 LINK blobcli 00:05:32.753 LINK nvme_manage 00:05:32.753 CXX test/cpp_headers/jsonrpc.o 00:05:32.753 LINK arbitration 00:05:32.753 LINK cuse 00:05:32.753 LINK cmb_copy 00:05:33.012 LINK hotplug 00:05:33.012 CXX test/cpp_headers/keyring.o 00:05:33.012 CXX test/cpp_headers/keyring_module.o 00:05:33.012 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:33.012 CXX test/cpp_headers/likely.o 00:05:33.012 CXX test/cpp_headers/log.o 00:05:33.012 CXX test/cpp_headers/lvol.o 00:05:33.012 CXX test/cpp_headers/md5.o 00:05:33.012 LINK abort 00:05:33.012 CXX test/cpp_headers/memory.o 00:05:33.012 CXX test/cpp_headers/mmio.o 00:05:33.271 CC test/bdev/bdevio/bdevio.o 00:05:33.271 CXX test/cpp_headers/nbd.o 00:05:33.271 LINK pmr_persistence 00:05:33.271 CXX 
test/cpp_headers/net.o 00:05:33.271 CXX test/cpp_headers/notify.o 00:05:33.271 CXX test/cpp_headers/nvme.o 00:05:33.271 CXX test/cpp_headers/nvme_intel.o 00:05:33.271 CXX test/cpp_headers/nvme_ocssd.o 00:05:33.271 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:33.271 CXX test/cpp_headers/nvme_spec.o 00:05:33.271 CXX test/cpp_headers/nvme_zns.o 00:05:33.271 CXX test/cpp_headers/nvmf_cmd.o 00:05:33.530 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:33.530 CXX test/cpp_headers/nvmf.o 00:05:33.530 CXX test/cpp_headers/nvmf_spec.o 00:05:33.530 CXX test/cpp_headers/nvmf_transport.o 00:05:33.530 CXX test/cpp_headers/opal.o 00:05:33.530 LINK bdevio 00:05:33.530 CC examples/nvmf/nvmf/nvmf.o 00:05:33.530 CXX test/cpp_headers/opal_spec.o 00:05:33.530 CXX test/cpp_headers/pci_ids.o 00:05:33.530 CXX test/cpp_headers/pipe.o 00:05:33.530 CXX test/cpp_headers/queue.o 00:05:33.788 CXX test/cpp_headers/reduce.o 00:05:33.788 CXX test/cpp_headers/rpc.o 00:05:33.788 CXX test/cpp_headers/scheduler.o 00:05:33.788 CXX test/cpp_headers/scsi.o 00:05:33.788 CXX test/cpp_headers/scsi_spec.o 00:05:33.788 CXX test/cpp_headers/sock.o 00:05:33.788 CXX test/cpp_headers/stdinc.o 00:05:33.788 CXX test/cpp_headers/string.o 00:05:33.788 CXX test/cpp_headers/thread.o 00:05:33.788 CXX test/cpp_headers/trace.o 00:05:33.788 CXX test/cpp_headers/trace_parser.o 00:05:33.788 LINK nvmf 00:05:34.047 CXX test/cpp_headers/tree.o 00:05:34.047 CXX test/cpp_headers/ublk.o 00:05:34.047 CXX test/cpp_headers/util.o 00:05:34.047 CXX test/cpp_headers/uuid.o 00:05:34.047 CXX test/cpp_headers/version.o 00:05:34.047 CXX test/cpp_headers/vfio_user_pci.o 00:05:34.047 CXX test/cpp_headers/vfio_user_spec.o 00:05:34.047 CXX test/cpp_headers/vhost.o 00:05:34.047 CXX test/cpp_headers/vmd.o 00:05:34.047 CXX test/cpp_headers/xor.o 00:05:34.047 CXX test/cpp_headers/zipf.o 00:05:37.338 LINK esnap 00:05:37.338 00:05:37.338 real 1m28.986s 00:05:37.338 user 7m4.055s 00:05:37.338 sys 1m8.587s 00:05:37.338 16:41:00 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:37.338 ************************************ 00:05:37.338 END TEST make 00:05:37.338 ************************************ 00:05:37.338 16:41:00 make -- common/autotest_common.sh@10 -- $ set +x 00:05:37.338 16:41:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:37.338 16:41:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:37.338 16:41:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:37.338 16:41:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:37.338 16:41:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:37.338 16:41:00 -- pm/common@44 -- $ pid=6033 00:05:37.338 16:41:00 -- pm/common@50 -- $ kill -TERM 6033 00:05:37.338 16:41:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:37.338 16:41:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:37.338 16:41:00 -- pm/common@44 -- $ pid=6034 00:05:37.338 16:41:00 -- pm/common@50 -- $ kill -TERM 6034 00:05:37.338 16:41:00 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:37.338 16:41:00 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:37.338 16:41:01 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.338 16:41:01 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.338 16:41:01 -- common/autotest_common.sh@1693 -- # lcov --version 
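The trace that follows probes lcov --version, keeps the last field with awk, and compares the result field by field against 1.15 and 2 before choosing the branch/function coverage flags. A minimal standalone sketch of that dotted-version comparison, assuming a hypothetical helper name version_lt rather than the exact scripts/common.sh implementation:

#!/usr/bin/env bash
# Sketch only: succeed when dotted version $1 is strictly lower than $2.
# version_lt is a hypothetical name; it mirrors the split-on-dots comparison
# traced below, not the real cmp_versions helper.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

lcov_ver=$(lcov --version | awk '{print $NF}')      # e.g. "1.15"
if version_lt "$lcov_ver" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi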
00:05:37.598 16:41:01 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.598 16:41:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.598 16:41:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.598 16:41:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.598 16:41:01 -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.598 16:41:01 -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.598 16:41:01 -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.598 16:41:01 -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.598 16:41:01 -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.598 16:41:01 -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.598 16:41:01 -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.598 16:41:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.598 16:41:01 -- scripts/common.sh@344 -- # case "$op" in 00:05:37.598 16:41:01 -- scripts/common.sh@345 -- # : 1 00:05:37.598 16:41:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.598 16:41:01 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.598 16:41:01 -- scripts/common.sh@365 -- # decimal 1 00:05:37.598 16:41:01 -- scripts/common.sh@353 -- # local d=1 00:05:37.598 16:41:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.598 16:41:01 -- scripts/common.sh@355 -- # echo 1 00:05:37.598 16:41:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.598 16:41:01 -- scripts/common.sh@366 -- # decimal 2 00:05:37.598 16:41:01 -- scripts/common.sh@353 -- # local d=2 00:05:37.598 16:41:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.598 16:41:01 -- scripts/common.sh@355 -- # echo 2 00:05:37.598 16:41:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.598 16:41:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.598 16:41:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.598 16:41:01 -- scripts/common.sh@368 -- # return 0 00:05:37.598 16:41:01 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.598 16:41:01 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.598 --rc genhtml_branch_coverage=1 00:05:37.598 --rc genhtml_function_coverage=1 00:05:37.598 --rc genhtml_legend=1 00:05:37.598 --rc geninfo_all_blocks=1 00:05:37.598 --rc geninfo_unexecuted_blocks=1 00:05:37.598 00:05:37.598 ' 00:05:37.598 16:41:01 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.598 --rc genhtml_branch_coverage=1 00:05:37.598 --rc genhtml_function_coverage=1 00:05:37.598 --rc genhtml_legend=1 00:05:37.598 --rc geninfo_all_blocks=1 00:05:37.598 --rc geninfo_unexecuted_blocks=1 00:05:37.598 00:05:37.598 ' 00:05:37.598 16:41:01 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.598 --rc genhtml_branch_coverage=1 00:05:37.598 --rc genhtml_function_coverage=1 00:05:37.598 --rc genhtml_legend=1 00:05:37.598 --rc geninfo_all_blocks=1 00:05:37.598 --rc geninfo_unexecuted_blocks=1 00:05:37.598 00:05:37.598 ' 00:05:37.598 16:41:01 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.598 --rc genhtml_branch_coverage=1 00:05:37.598 --rc genhtml_function_coverage=1 00:05:37.598 --rc genhtml_legend=1 00:05:37.598 --rc geninfo_all_blocks=1 00:05:37.598 --rc 
geninfo_unexecuted_blocks=1 00:05:37.598 00:05:37.598 ' 00:05:37.598 16:41:01 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:37.598 16:41:01 -- nvmf/common.sh@7 -- # uname -s 00:05:37.598 16:41:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.598 16:41:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.598 16:41:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:37.598 16:41:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.598 16:41:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:37.598 16:41:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:37.598 16:41:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.598 16:41:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:37.598 16:41:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.598 16:41:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:37.598 16:41:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:05:37.598 16:41:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:05:37.598 16:41:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.598 16:41:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:37.598 16:41:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:37.598 16:41:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:37.598 16:41:01 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:37.598 16:41:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:37.598 16:41:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.598 16:41:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.598 16:41:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.599 16:41:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.599 16:41:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.599 16:41:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.599 16:41:01 -- paths/export.sh@5 -- # export PATH 00:05:37.599 16:41:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.599 16:41:01 -- nvmf/common.sh@51 -- # : 0 00:05:37.599 16:41:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:37.599 16:41:01 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:37.599 16:41:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:37.599 16:41:01 -- nvmf/common.sh@29 -- 
# NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:37.599 16:41:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:37.599 16:41:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:37.599 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:37.599 16:41:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:37.599 16:41:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:37.599 16:41:01 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:37.599 16:41:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:37.599 16:41:01 -- spdk/autotest.sh@32 -- # uname -s 00:05:37.599 16:41:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:37.599 16:41:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:37.599 16:41:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:37.599 16:41:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:37.599 16:41:01 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:37.599 16:41:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:37.599 16:41:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:37.599 16:41:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:37.599 16:41:01 -- spdk/autotest.sh@48 -- # udevadm_pid=68528 00:05:37.599 16:41:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:37.599 16:41:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:37.599 16:41:01 -- pm/common@17 -- # local monitor 00:05:37.599 16:41:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:37.599 16:41:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:37.599 16:41:01 -- pm/common@21 -- # date +%s 00:05:37.599 16:41:01 -- pm/common@25 -- # sleep 1 00:05:37.599 16:41:01 -- pm/common@21 -- # date +%s 00:05:37.599 16:41:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732898461 00:05:37.599 16:41:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732898461 00:05:37.599 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732898461_collect-cpu-load.pm.log 00:05:37.599 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732898461_collect-vmstat.pm.log 00:05:38.536 16:41:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:38.536 16:41:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:38.536 16:41:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.536 16:41:02 -- common/autotest_common.sh@10 -- # set +x 00:05:38.536 16:41:02 -- spdk/autotest.sh@59 -- # create_test_list 00:05:38.536 16:41:02 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:38.536 16:41:02 -- common/autotest_common.sh@10 -- # set +x 00:05:38.536 16:41:02 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:38.536 16:41:02 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:38.536 16:41:02 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:38.536 16:41:02 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:38.536 16:41:02 -- spdk/autotest.sh@63 -- # cd 
/home/vagrant/spdk_repo/spdk 00:05:38.536 16:41:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:38.536 16:41:02 -- common/autotest_common.sh@1457 -- # uname 00:05:38.795 16:41:02 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:38.795 16:41:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:38.795 16:41:02 -- common/autotest_common.sh@1477 -- # uname 00:05:38.795 16:41:02 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:38.795 16:41:02 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:38.795 16:41:02 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:38.795 lcov: LCOV version 1.15 00:05:38.795 16:41:02 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:53.677 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:53.677 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:08.631 16:41:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:08.631 16:41:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.631 16:41:31 -- common/autotest_common.sh@10 -- # set +x 00:06:08.631 16:41:31 -- spdk/autotest.sh@78 -- # rm -f 00:06:08.631 16:41:31 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:08.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:08.631 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:08.631 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:08.631 16:41:31 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:08.631 16:41:31 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:08.631 16:41:31 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:08.631 16:41:31 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:08.631 16:41:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:08.631 16:41:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:08.631 16:41:31 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:08.631 16:41:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:08.631 16:41:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:08.631 16:41:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:08.631 16:41:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:08.631 16:41:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:08.631 16:41:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:08.631 16:41:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:08.631 16:41:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:08.631 16:41:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:06:08.631 16:41:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:08.631 16:41:31 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:08.631 16:41:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:08.631 16:41:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:08.631 16:41:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:06:08.631 16:41:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:08.631 16:41:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:08.631 16:41:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:08.631 16:41:31 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:08.631 16:41:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:08.631 16:41:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:08.631 16:41:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:08.631 16:41:31 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:08.631 16:41:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:08.631 No valid GPT data, bailing 00:06:08.631 16:41:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:08.631 16:41:31 -- scripts/common.sh@394 -- # pt= 00:06:08.631 16:41:31 -- scripts/common.sh@395 -- # return 1 00:06:08.631 16:41:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:08.631 1+0 records in 00:06:08.631 1+0 records out 00:06:08.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491994 s, 213 MB/s 00:06:08.631 16:41:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:08.631 16:41:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:08.631 16:41:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:08.631 16:41:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:08.631 16:41:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:08.631 No valid GPT data, bailing 00:06:08.631 16:41:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:08.631 16:41:32 -- scripts/common.sh@394 -- # pt= 00:06:08.631 16:41:32 -- scripts/common.sh@395 -- # return 1 00:06:08.631 16:41:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:08.631 1+0 records in 00:06:08.631 1+0 records out 00:06:08.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440214 s, 238 MB/s 00:06:08.631 16:41:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:08.631 16:41:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:08.631 16:41:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:08.631 16:41:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:08.631 16:41:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:08.631 No valid GPT data, bailing 00:06:08.631 16:41:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:08.631 16:41:32 -- scripts/common.sh@394 -- # pt= 00:06:08.631 16:41:32 -- scripts/common.sh@395 -- # return 1 00:06:08.631 16:41:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:08.631 1+0 records in 00:06:08.631 1+0 records out 00:06:08.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423202 s, 248 MB/s 00:06:08.631 16:41:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:08.631 16:41:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:08.631 16:41:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:08.631 
16:41:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:08.631 16:41:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:08.631 No valid GPT data, bailing 00:06:08.631 16:41:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:08.631 16:41:32 -- scripts/common.sh@394 -- # pt= 00:06:08.631 16:41:32 -- scripts/common.sh@395 -- # return 1 00:06:08.631 16:41:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:08.631 1+0 records in 00:06:08.631 1+0 records out 00:06:08.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480385 s, 218 MB/s 00:06:08.631 16:41:32 -- spdk/autotest.sh@105 -- # sync 00:06:08.631 16:41:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:08.631 16:41:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:08.631 16:41:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:10.535 16:41:34 -- spdk/autotest.sh@111 -- # uname -s 00:06:10.535 16:41:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:10.535 16:41:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:10.535 16:41:34 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:11.103 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:11.103 Hugepages 00:06:11.103 node hugesize free / total 00:06:11.362 node0 1048576kB 0 / 0 00:06:11.362 node0 2048kB 0 / 0 00:06:11.362 00:06:11.362 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:11.362 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:11.362 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:11.362 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:11.362 16:41:35 -- spdk/autotest.sh@117 -- # uname -s 00:06:11.362 16:41:35 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:11.362 16:41:35 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:11.362 16:41:35 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:12.297 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:12.297 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:12.297 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:12.297 16:41:35 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:13.234 16:41:36 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:13.234 16:41:36 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:13.234 16:41:36 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:13.234 16:41:36 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:13.234 16:41:36 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:13.234 16:41:36 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:13.234 16:41:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:13.234 16:41:36 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:13.234 16:41:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:13.493 16:41:37 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:13.493 16:41:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:13.493 16:41:37 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
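The records above step through the pre-test device cleanup: every /dev/nvme*n* namespace is checked against /sys/block/*/queue/zoned, probed for a partition table with spdk-gpt.py and blkid (hence the "No valid GPT data, bailing" messages), and then has its first MiB zeroed with dd. A condensed, hedged sketch of that pattern built only from the standard tools visible in the trace, not the autotest code verbatim:

#!/usr/bin/env bash
# Sketch only: wipe the first MiB of each non-zoned, non-partitioned NVMe
# namespace, mirroring the zoned/PTTYPE/dd sequence traced above.
shopt -s extglob nullglob

for dev in /dev/nvme*n!(*p*); do
    name=${dev#/dev/}

    # Regular (non-zoned) namespaces report "none" in sysfs.
    zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null)
    [[ -n $zoned && $zoned != none ]] && continue

    # Leave devices alone if they already carry a partition table.
    if pt=$(blkid -s PTTYPE -o value "$dev") && [[ -n $pt ]]; then
        echo "$dev has a $pt partition table, skipping"
        continue
    fi

    dd if=/dev/zero of="$dev" bs=1M count=1
done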
00:06:13.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:13.752 Waiting for block devices as requested 00:06:13.752 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:14.012 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:14.012 16:41:37 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:14.012 16:41:37 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:14.012 16:41:37 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:14.012 16:41:37 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:14.012 16:41:37 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:14.012 16:41:37 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:14.012 16:41:37 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:14.012 16:41:37 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:14.012 16:41:37 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:14.012 16:41:37 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:14.012 16:41:37 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:14.012 16:41:37 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:14.012 16:41:37 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:14.012 16:41:37 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:14.012 16:41:37 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:14.012 16:41:37 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:14.012 16:41:37 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:14.012 16:41:37 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:14.012 16:41:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:14.012 16:41:37 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:14.012 16:41:37 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:14.012 16:41:37 -- common/autotest_common.sh@1543 -- # continue 00:06:14.012 16:41:37 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:14.012 16:41:37 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:14.012 16:41:37 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:14.012 16:41:37 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:14.012 16:41:37 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:14.012 16:41:37 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:14.012 16:41:37 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:14.012 16:41:37 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:14.012 16:41:37 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:14.012 16:41:37 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:14.012 16:41:37 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:14.012 16:41:37 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:14.012 16:41:37 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:14.012 16:41:37 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:14.012 16:41:37 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:14.012 16:41:37 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:14.012 16:41:37 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:14.012 16:41:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:14.012 16:41:37 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:14.012 16:41:37 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:14.012 16:41:37 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:14.012 16:41:37 -- common/autotest_common.sh@1543 -- # continue 00:06:14.012 16:41:37 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:14.012 16:41:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:14.012 16:41:37 -- common/autotest_common.sh@10 -- # set +x 00:06:14.012 16:41:37 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:14.012 16:41:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:14.012 16:41:37 -- common/autotest_common.sh@10 -- # set +x 00:06:14.012 16:41:37 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:14.580 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:14.841 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:14.841 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:14.841 16:41:38 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:14.841 16:41:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:14.841 16:41:38 -- common/autotest_common.sh@10 -- # set +x 00:06:14.841 16:41:38 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:14.841 16:41:38 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:15.100 16:41:38 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:15.100 16:41:38 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:15.100 16:41:38 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:15.100 16:41:38 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:15.100 16:41:38 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:15.100 16:41:38 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:15.100 16:41:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:15.100 16:41:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:15.100 16:41:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:15.100 16:41:38 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:15.100 16:41:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:15.100 16:41:38 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:15.100 16:41:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:15.100 16:41:38 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:15.100 16:41:38 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:15.100 16:41:38 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:15.100 16:41:38 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:15.100 16:41:38 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:15.100 16:41:38 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:15.100 16:41:38 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:15.100 16:41:38 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
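The id-ctrl checks traced here resolve each PCI address to its NVMe character device through the /sys/class/nvme symlinks, then read the OACS word to see whether the controller supports namespace management (bit 3, hence oacs_ns_manage=8 for oacs 0x12a) before inspecting unvmcap. A hedged sketch of that lookup with nvme-cli, assuming the same sysfs layout seen in the trace:

#!/usr/bin/env bash
# Sketch only: map one PCI address to its /dev/nvmeX node and test whether the
# controller advertises namespace management (OACS bit 3), as traced above.
bdf=0000:00:10.0

ctrl=
for link in /sys/class/nvme/nvme*; do
    # The symlink target contains the PCI address of the owning controller.
    if readlink -f "$link" | grep -q "$bdf/nvme/"; then
        ctrl=/dev/$(basename "$link")
        break
    fi
done
[[ -n $ctrl ]] || { echo "no controller found for $bdf" >&2; exit 1; }

# id-ctrl prints "oacs : 0x12a"-style lines; keep the value and mask bit 3.
oacs=$(nvme id-ctrl "$ctrl" | awk -F: '/^oacs/ {gsub(/ /, "", $2); print $2}')
if (( oacs & 0x8 )); then
    echo "$ctrl supports namespace management (oacs=$oacs)"
else
    echo "$ctrl cannot manage namespaces, nothing to revert"
fi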
00:06:15.100 16:41:38 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:15.100 16:41:38 -- common/autotest_common.sh@1572 -- # return 0 00:06:15.100 16:41:38 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:15.100 16:41:38 -- common/autotest_common.sh@1580 -- # return 0 00:06:15.100 16:41:38 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:15.100 16:41:38 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:15.100 16:41:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:15.100 16:41:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:15.100 16:41:38 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:15.100 16:41:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.100 16:41:38 -- common/autotest_common.sh@10 -- # set +x 00:06:15.100 16:41:38 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:15.100 16:41:38 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:15.100 16:41:38 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:15.100 16:41:38 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:15.100 16:41:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.100 16:41:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.100 16:41:38 -- common/autotest_common.sh@10 -- # set +x 00:06:15.100 ************************************ 00:06:15.100 START TEST env 00:06:15.100 ************************************ 00:06:15.100 16:41:38 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:15.100 * Looking for test storage... 00:06:15.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:15.100 16:41:38 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.100 16:41:38 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.100 16:41:38 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.359 16:41:38 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.359 16:41:38 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.360 16:41:38 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.360 16:41:38 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.360 16:41:38 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.360 16:41:38 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.360 16:41:38 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.360 16:41:38 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.360 16:41:38 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.360 16:41:38 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.360 16:41:38 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.360 16:41:38 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.360 16:41:38 env -- scripts/common.sh@344 -- # case "$op" in 00:06:15.360 16:41:38 env -- scripts/common.sh@345 -- # : 1 00:06:15.360 16:41:38 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.360 16:41:38 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.360 16:41:38 env -- scripts/common.sh@365 -- # decimal 1 00:06:15.360 16:41:38 env -- scripts/common.sh@353 -- # local d=1 00:06:15.360 16:41:38 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.360 16:41:38 env -- scripts/common.sh@355 -- # echo 1 00:06:15.360 16:41:38 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.360 16:41:38 env -- scripts/common.sh@366 -- # decimal 2 00:06:15.360 16:41:38 env -- scripts/common.sh@353 -- # local d=2 00:06:15.360 16:41:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.360 16:41:38 env -- scripts/common.sh@355 -- # echo 2 00:06:15.360 16:41:38 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.360 16:41:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.360 16:41:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.360 16:41:38 env -- scripts/common.sh@368 -- # return 0 00:06:15.360 16:41:38 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.360 16:41:38 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.360 --rc genhtml_branch_coverage=1 00:06:15.360 --rc genhtml_function_coverage=1 00:06:15.360 --rc genhtml_legend=1 00:06:15.360 --rc geninfo_all_blocks=1 00:06:15.360 --rc geninfo_unexecuted_blocks=1 00:06:15.360 00:06:15.360 ' 00:06:15.360 16:41:38 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.360 --rc genhtml_branch_coverage=1 00:06:15.360 --rc genhtml_function_coverage=1 00:06:15.360 --rc genhtml_legend=1 00:06:15.360 --rc geninfo_all_blocks=1 00:06:15.360 --rc geninfo_unexecuted_blocks=1 00:06:15.360 00:06:15.360 ' 00:06:15.360 16:41:38 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.360 --rc genhtml_branch_coverage=1 00:06:15.360 --rc genhtml_function_coverage=1 00:06:15.360 --rc genhtml_legend=1 00:06:15.360 --rc geninfo_all_blocks=1 00:06:15.360 --rc geninfo_unexecuted_blocks=1 00:06:15.360 00:06:15.360 ' 00:06:15.360 16:41:38 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.360 --rc genhtml_branch_coverage=1 00:06:15.360 --rc genhtml_function_coverage=1 00:06:15.360 --rc genhtml_legend=1 00:06:15.360 --rc geninfo_all_blocks=1 00:06:15.360 --rc geninfo_unexecuted_blocks=1 00:06:15.360 00:06:15.360 ' 00:06:15.360 16:41:38 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:15.360 16:41:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.360 16:41:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.360 16:41:38 env -- common/autotest_common.sh@10 -- # set +x 00:06:15.360 ************************************ 00:06:15.360 START TEST env_memory 00:06:15.360 ************************************ 00:06:15.360 16:41:38 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:15.360 00:06:15.360 00:06:15.360 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.360 http://cunit.sourceforge.net/ 00:06:15.360 00:06:15.360 00:06:15.360 Suite: memory 00:06:15.360 Test: alloc and free memory map ...[2024-11-29 16:41:38.954300] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:15.360 passed 00:06:15.360 Test: mem map translation ...[2024-11-29 16:41:38.977868] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:15.360 [2024-11-29 16:41:38.977916] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:15.360 [2024-11-29 16:41:38.977972] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:15.360 [2024-11-29 16:41:38.977980] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:15.360 passed 00:06:15.360 Test: mem map registration ...[2024-11-29 16:41:39.021337] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:15.360 [2024-11-29 16:41:39.021387] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:15.360 passed 00:06:15.360 Test: mem map adjacent registrations ...passed 00:06:15.360 00:06:15.360 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.360 suites 1 1 n/a 0 0 00:06:15.360 tests 4 4 4 0 0 00:06:15.360 asserts 152 152 152 0 n/a 00:06:15.360 00:06:15.360 Elapsed time = 0.147 seconds 00:06:15.360 00:06:15.360 real 0m0.161s 00:06:15.360 user 0m0.146s 00:06:15.360 sys 0m0.010s 00:06:15.360 16:41:39 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.360 16:41:39 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:15.360 ************************************ 00:06:15.360 END TEST env_memory 00:06:15.360 ************************************ 00:06:15.360 16:41:39 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:15.360 16:41:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.360 16:41:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.360 16:41:39 env -- common/autotest_common.sh@10 -- # set +x 00:06:15.360 ************************************ 00:06:15.360 START TEST env_vtophys 00:06:15.360 ************************************ 00:06:15.360 16:41:39 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:15.620 EAL: lib.eal log level changed from notice to debug 00:06:15.620 EAL: Detected lcore 0 as core 0 on socket 0 00:06:15.620 EAL: Detected lcore 1 as core 0 on socket 0 00:06:15.620 EAL: Detected lcore 2 as core 0 on socket 0 00:06:15.620 EAL: Detected lcore 3 as core 0 on socket 0 00:06:15.620 EAL: Detected lcore 4 as core 0 on socket 0 00:06:15.620 EAL: Detected lcore 5 as core 0 on socket 0 00:06:15.620 EAL: Detected lcore 6 as core 0 on socket 0 00:06:15.620 EAL: Detected lcore 7 as core 0 on socket 0 00:06:15.620 EAL: Detected lcore 8 as core 0 on socket 0 00:06:15.620 EAL: Detected lcore 9 as core 0 on socket 0 00:06:15.620 EAL: Maximum logical cores by configuration: 128 00:06:15.620 EAL: Detected CPU lcores: 10 00:06:15.620 EAL: Detected NUMA nodes: 1 00:06:15.620 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:06:15.620 EAL: Detected shared linkage of DPDK 00:06:15.620 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:06:15.620 EAL: Registered [vdev] bus. 00:06:15.620 EAL: bus.vdev log level changed from disabled to notice 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:06:15.620 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:15.620 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25.0 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25.0 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:06:15.620 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:06:15.620 EAL: No shared files mode enabled, IPC will be disabled 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: Selected IOVA mode 'PA' 00:06:15.620 EAL: Probing VFIO support... 00:06:15.620 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:15.620 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:15.620 EAL: Ask a virtual area of 0x2e000 bytes 00:06:15.620 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:15.620 EAL: Setting up physically contiguous memory... 
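The "Module /sys/module/vfio not found" and "VFIO modules not loaded, skipping VFIO support" lines above are expected on this VM: scripts/setup.sh bound both NVMe controllers to uio_pci_generic earlier in the log and no vfio/vfio_pci module is loaded, so EAL falls back to IOVA mode 'PA'. A quick manual check of that binding is sketched below; the two BDFs are the controllers from this run, and the sysfs "driver" symlink is standard kernel behaviour rather than anything specific to this harness:

for bdf in 0000:00:10.0 0000:00:11.0; do
    # each PCI device exposes a "driver" symlink naming its bound kernel driver
    basename "$(readlink "/sys/bus/pci/devices/$bdf/driver")"
done
# expected on this box: uio_pci_generic printed for both controllers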
00:06:15.620 EAL: Setting maximum number of open files to 524288 00:06:15.620 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:15.620 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:15.620 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.620 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:15.620 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:15.620 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.620 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:15.620 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:15.620 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.620 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:15.620 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:15.620 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.620 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:15.620 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:15.620 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.620 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:15.620 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:15.620 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.620 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:15.620 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:15.620 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.620 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:15.620 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:15.620 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.620 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:15.620 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:15.620 EAL: Hugepages will be freed exactly as allocated. 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: TSC frequency is ~2200000 KHz 00:06:15.620 EAL: Main lcore 0 is ready (tid=7f215b27ca00;cpuset=[0]) 00:06:15.620 EAL: Trying to obtain current memory policy. 00:06:15.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.620 EAL: Restoring previous memory policy: 0 00:06:15.620 EAL: request: mp_malloc_sync 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: Heap on socket 0 was expanded by 2MB 00:06:15.620 EAL: Allocated 2112 bytes of per-lcore data with a 64-byte alignment 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: Mem event callback 'spdk:(nil)' registered 00:06:15.620 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:15.620 00:06:15.620 00:06:15.620 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.620 http://cunit.sourceforge.net/ 00:06:15.620 00:06:15.620 00:06:15.620 Suite: components_suite 00:06:15.620 Test: vtophys_malloc_test ...passed 00:06:15.620 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:15.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.620 EAL: Restoring previous memory policy: 4 00:06:15.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.620 EAL: request: mp_malloc_sync 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: Heap on socket 0 was expanded by 4MB 00:06:15.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.620 EAL: request: mp_malloc_sync 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: Heap on socket 0 was shrunk by 4MB 00:06:15.620 EAL: Trying to obtain current memory policy. 00:06:15.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.620 EAL: Restoring previous memory policy: 4 00:06:15.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.620 EAL: request: mp_malloc_sync 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: Heap on socket 0 was expanded by 6MB 00:06:15.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.620 EAL: request: mp_malloc_sync 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: Heap on socket 0 was shrunk by 6MB 00:06:15.620 EAL: Trying to obtain current memory policy. 00:06:15.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.620 EAL: Restoring previous memory policy: 4 00:06:15.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.620 EAL: request: mp_malloc_sync 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: Heap on socket 0 was expanded by 10MB 00:06:15.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.620 EAL: request: mp_malloc_sync 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: Heap on socket 0 was shrunk by 10MB 00:06:15.620 EAL: Trying to obtain current memory policy. 00:06:15.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.620 EAL: Restoring previous memory policy: 4 00:06:15.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.620 EAL: request: mp_malloc_sync 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: Heap on socket 0 was expanded by 18MB 00:06:15.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.620 EAL: request: mp_malloc_sync 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: Heap on socket 0 was shrunk by 18MB 00:06:15.620 EAL: Trying to obtain current memory policy. 00:06:15.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.620 EAL: Restoring previous memory policy: 4 00:06:15.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.620 EAL: request: mp_malloc_sync 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: Heap on socket 0 was expanded by 34MB 00:06:15.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.620 EAL: request: mp_malloc_sync 00:06:15.620 EAL: No shared files mode enabled, IPC is disabled 00:06:15.620 EAL: Heap on socket 0 was shrunk by 34MB 00:06:15.621 EAL: Trying to obtain current memory policy. 
00:06:15.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.621 EAL: Restoring previous memory policy: 4 00:06:15.621 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.621 EAL: request: mp_malloc_sync 00:06:15.621 EAL: No shared files mode enabled, IPC is disabled 00:06:15.621 EAL: Heap on socket 0 was expanded by 66MB 00:06:15.621 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.621 EAL: request: mp_malloc_sync 00:06:15.621 EAL: No shared files mode enabled, IPC is disabled 00:06:15.621 EAL: Heap on socket 0 was shrunk by 66MB 00:06:15.621 EAL: Trying to obtain current memory policy. 00:06:15.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.621 EAL: Restoring previous memory policy: 4 00:06:15.621 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.621 EAL: request: mp_malloc_sync 00:06:15.621 EAL: No shared files mode enabled, IPC is disabled 00:06:15.621 EAL: Heap on socket 0 was expanded by 130MB 00:06:15.621 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.621 EAL: request: mp_malloc_sync 00:06:15.621 EAL: No shared files mode enabled, IPC is disabled 00:06:15.621 EAL: Heap on socket 0 was shrunk by 130MB 00:06:15.621 EAL: Trying to obtain current memory policy. 00:06:15.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.880 EAL: Restoring previous memory policy: 4 00:06:15.880 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.880 EAL: request: mp_malloc_sync 00:06:15.880 EAL: No shared files mode enabled, IPC is disabled 00:06:15.880 EAL: Heap on socket 0 was expanded by 258MB 00:06:15.880 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.880 EAL: request: mp_malloc_sync 00:06:15.880 EAL: No shared files mode enabled, IPC is disabled 00:06:15.880 EAL: Heap on socket 0 was shrunk by 258MB 00:06:15.880 EAL: Trying to obtain current memory policy. 00:06:15.880 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.880 EAL: Restoring previous memory policy: 4 00:06:15.880 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.880 EAL: request: mp_malloc_sync 00:06:15.880 EAL: No shared files mode enabled, IPC is disabled 00:06:15.880 EAL: Heap on socket 0 was expanded by 514MB 00:06:15.880 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.880 EAL: request: mp_malloc_sync 00:06:15.880 EAL: No shared files mode enabled, IPC is disabled 00:06:15.880 EAL: Heap on socket 0 was shrunk by 514MB 00:06:15.880 EAL: Trying to obtain current memory policy. 
00:06:15.880 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.139 EAL: Restoring previous memory policy: 4 00:06:16.139 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.139 EAL: request: mp_malloc_sync 00:06:16.139 EAL: No shared files mode enabled, IPC is disabled 00:06:16.139 EAL: Heap on socket 0 was expanded by 1026MB 00:06:16.139 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.398 passed 00:06:16.398 00:06:16.398 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.398 suites 1 1 n/a 0 0 00:06:16.398 tests 2 2 2 0 0 00:06:16.398 asserts 5722 5722 5722 0 n/a 00:06:16.398 00:06:16.398 Elapsed time = 0.635 seconds 00:06:16.398 EAL: request: mp_malloc_sync 00:06:16.398 EAL: No shared files mode enabled, IPC is disabled 00:06:16.398 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:16.398 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.398 EAL: request: mp_malloc_sync 00:06:16.398 EAL: No shared files mode enabled, IPC is disabled 00:06:16.398 EAL: Heap on socket 0 was shrunk by 2MB 00:06:16.398 EAL: No shared files mode enabled, IPC is disabled 00:06:16.398 EAL: No shared files mode enabled, IPC is disabled 00:06:16.398 EAL: No shared files mode enabled, IPC is disabled 00:06:16.398 00:06:16.398 real 0m0.844s 00:06:16.398 user 0m0.437s 00:06:16.398 sys 0m0.278s 00:06:16.398 16:41:39 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.398 ************************************ 00:06:16.398 END TEST env_vtophys 00:06:16.398 ************************************ 00:06:16.398 16:41:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:16.398 16:41:40 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:16.398 16:41:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.398 16:41:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.398 16:41:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.398 ************************************ 00:06:16.398 START TEST env_pci 00:06:16.398 ************************************ 00:06:16.398 16:41:40 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:16.398 00:06:16.398 00:06:16.398 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.398 http://cunit.sourceforge.net/ 00:06:16.398 00:06:16.398 00:06:16.398 Suite: pci 00:06:16.398 Test: pci_hook ...[2024-11-29 16:41:40.043978] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70749 has claimed it 00:06:16.398 passed 00:06:16.398 00:06:16.398 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.398 suites 1 1 n/a 0 0 00:06:16.398 tests 1 1 1 0 0 00:06:16.398 asserts 25 25 25 0 n/a 00:06:16.398 00:06:16.398 Elapsed time = 0.002 seconds 00:06:16.398 EAL: Cannot find device (10000:00:01.0) 00:06:16.398 EAL: Failed to attach device on primary process 00:06:16.398 00:06:16.398 real 0m0.021s 00:06:16.398 user 0m0.008s 00:06:16.398 sys 0m0.012s 00:06:16.398 16:41:40 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.398 16:41:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:16.398 ************************************ 00:06:16.398 END TEST env_pci 00:06:16.398 ************************************ 00:06:16.398 16:41:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:16.398 16:41:40 env -- env/env.sh@15 -- # uname 00:06:16.398 16:41:40 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:16.398 16:41:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:16.398 16:41:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:16.398 16:41:40 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:16.398 16:41:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.399 16:41:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.399 ************************************ 00:06:16.399 START TEST env_dpdk_post_init 00:06:16.399 ************************************ 00:06:16.399 16:41:40 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:16.399 EAL: Detected CPU lcores: 10 00:06:16.399 EAL: Detected NUMA nodes: 1 00:06:16.399 EAL: Detected shared linkage of DPDK 00:06:16.399 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:16.399 EAL: Selected IOVA mode 'PA' 00:06:16.657 Starting DPDK initialization... 00:06:16.657 Starting SPDK post initialization... 00:06:16.657 SPDK NVMe probe 00:06:16.657 Attaching to 0000:00:10.0 00:06:16.657 Attaching to 0000:00:11.0 00:06:16.657 Attached to 0000:00:10.0 00:06:16.657 Attached to 0000:00:11.0 00:06:16.657 Cleaning up... 00:06:16.657 00:06:16.657 real 0m0.202s 00:06:16.657 user 0m0.061s 00:06:16.657 sys 0m0.041s 00:06:16.657 16:41:40 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.657 ************************************ 00:06:16.657 END TEST env_dpdk_post_init 00:06:16.657 ************************************ 00:06:16.657 16:41:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:16.657 16:41:40 env -- env/env.sh@26 -- # uname 00:06:16.657 16:41:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:16.657 16:41:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:16.657 16:41:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.657 16:41:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.657 16:41:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.657 ************************************ 00:06:16.657 START TEST env_mem_callbacks 00:06:16.657 ************************************ 00:06:16.657 16:41:40 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:16.658 EAL: Detected CPU lcores: 10 00:06:16.658 EAL: Detected NUMA nodes: 1 00:06:16.658 EAL: Detected shared linkage of DPDK 00:06:16.658 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:16.658 EAL: Selected IOVA mode 'PA' 00:06:16.917 00:06:16.917 00:06:16.917 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.917 http://cunit.sourceforge.net/ 00:06:16.917 00:06:16.917 00:06:16.917 Suite: memory 00:06:16.917 Test: test ... 
00:06:16.917 register 0x200000200000 2097152 00:06:16.917 malloc 3145728 00:06:16.917 register 0x200000400000 4194304 00:06:16.917 buf 0x200000500000 len 3145728 PASSED 00:06:16.917 malloc 64 00:06:16.917 buf 0x2000004fff40 len 64 PASSED 00:06:16.917 malloc 4194304 00:06:16.917 register 0x200000800000 6291456 00:06:16.917 buf 0x200000a00000 len 4194304 PASSED 00:06:16.917 free 0x200000500000 3145728 00:06:16.917 free 0x2000004fff40 64 00:06:16.917 unregister 0x200000400000 4194304 PASSED 00:06:16.917 free 0x200000a00000 4194304 00:06:16.917 unregister 0x200000800000 6291456 PASSED 00:06:16.917 malloc 8388608 00:06:16.917 register 0x200000400000 10485760 00:06:16.917 buf 0x200000600000 len 8388608 PASSED 00:06:16.917 free 0x200000600000 8388608 00:06:16.917 unregister 0x200000400000 10485760 PASSED 00:06:16.917 passed 00:06:16.917 00:06:16.917 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.917 suites 1 1 n/a 0 0 00:06:16.917 tests 1 1 1 0 0 00:06:16.917 asserts 15 15 15 0 n/a 00:06:16.917 00:06:16.917 Elapsed time = 0.008 seconds 00:06:16.917 00:06:16.917 real 0m0.143s 00:06:16.917 user 0m0.019s 00:06:16.917 sys 0m0.023s 00:06:16.917 16:41:40 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.917 16:41:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:16.917 ************************************ 00:06:16.917 END TEST env_mem_callbacks 00:06:16.917 ************************************ 00:06:16.917 00:06:16.917 real 0m1.827s 00:06:16.917 user 0m0.874s 00:06:16.917 sys 0m0.606s 00:06:16.917 16:41:40 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.917 16:41:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.917 ************************************ 00:06:16.917 END TEST env 00:06:16.917 ************************************ 00:06:16.917 16:41:40 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:16.917 16:41:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.917 16:41:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.917 16:41:40 -- common/autotest_common.sh@10 -- # set +x 00:06:16.917 ************************************ 00:06:16.917 START TEST rpc 00:06:16.917 ************************************ 00:06:16.917 16:41:40 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:16.917 * Looking for test storage... 
00:06:16.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:16.917 16:41:40 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.917 16:41:40 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.917 16:41:40 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.176 16:41:40 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.176 16:41:40 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.176 16:41:40 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.176 16:41:40 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.176 16:41:40 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.176 16:41:40 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.177 16:41:40 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.177 16:41:40 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.177 16:41:40 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.177 16:41:40 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.177 16:41:40 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.177 16:41:40 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.177 16:41:40 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:17.177 16:41:40 rpc -- scripts/common.sh@345 -- # : 1 00:06:17.177 16:41:40 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.177 16:41:40 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.177 16:41:40 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:17.177 16:41:40 rpc -- scripts/common.sh@353 -- # local d=1 00:06:17.177 16:41:40 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.177 16:41:40 rpc -- scripts/common.sh@355 -- # echo 1 00:06:17.177 16:41:40 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.177 16:41:40 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:17.177 16:41:40 rpc -- scripts/common.sh@353 -- # local d=2 00:06:17.177 16:41:40 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.177 16:41:40 rpc -- scripts/common.sh@355 -- # echo 2 00:06:17.177 16:41:40 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.177 16:41:40 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.177 16:41:40 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.177 16:41:40 rpc -- scripts/common.sh@368 -- # return 0 00:06:17.177 16:41:40 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.177 16:41:40 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.177 --rc genhtml_branch_coverage=1 00:06:17.177 --rc genhtml_function_coverage=1 00:06:17.177 --rc genhtml_legend=1 00:06:17.177 --rc geninfo_all_blocks=1 00:06:17.177 --rc geninfo_unexecuted_blocks=1 00:06:17.177 00:06:17.177 ' 00:06:17.177 16:41:40 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.177 --rc genhtml_branch_coverage=1 00:06:17.177 --rc genhtml_function_coverage=1 00:06:17.177 --rc genhtml_legend=1 00:06:17.177 --rc geninfo_all_blocks=1 00:06:17.177 --rc geninfo_unexecuted_blocks=1 00:06:17.177 00:06:17.177 ' 00:06:17.177 16:41:40 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.177 --rc genhtml_branch_coverage=1 00:06:17.177 --rc genhtml_function_coverage=1 00:06:17.177 --rc 
genhtml_legend=1 00:06:17.177 --rc geninfo_all_blocks=1 00:06:17.177 --rc geninfo_unexecuted_blocks=1 00:06:17.177 00:06:17.177 ' 00:06:17.177 16:41:40 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.177 --rc genhtml_branch_coverage=1 00:06:17.177 --rc genhtml_function_coverage=1 00:06:17.177 --rc genhtml_legend=1 00:06:17.177 --rc geninfo_all_blocks=1 00:06:17.177 --rc geninfo_unexecuted_blocks=1 00:06:17.177 00:06:17.177 ' 00:06:17.177 16:41:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70872 00:06:17.177 16:41:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.177 16:41:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70872 00:06:17.177 16:41:40 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:17.177 16:41:40 rpc -- common/autotest_common.sh@835 -- # '[' -z 70872 ']' 00:06:17.177 16:41:40 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.177 16:41:40 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.177 16:41:40 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.177 16:41:40 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.177 16:41:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.177 [2024-11-29 16:41:40.862787] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:17.177 [2024-11-29 16:41:40.862898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70872 ] 00:06:17.437 [2024-11-29 16:41:40.989559] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:17.437 [2024-11-29 16:41:41.023255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.437 [2024-11-29 16:41:41.046693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:17.437 [2024-11-29 16:41:41.046763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70872' to capture a snapshot of events at runtime. 00:06:17.437 [2024-11-29 16:41:41.046782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.437 [2024-11-29 16:41:41.046792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.437 [2024-11-29 16:41:41.046801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70872 for offline analysis/debug. 
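With spdk_tgt launched via -e bdev and listening on /var/tmp/spdk.sock, the rpc_integrity test that follows drives it purely over JSON-RPC. The same round trip can be issued by hand with scripts/rpc.py; this is a hedged sketch of the sequence the test performs (method names and sizes match the rpc_cmd calls traced below, while the variable name and the jq checks are only illustrative):

rpcpy=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
malloc=$("$rpcpy" bdev_malloc_create 8 512)   # 8 MB malloc bdev, 512-byte blocks; prints its name (Malloc0 here)
"$rpcpy" bdev_passthru_create -b "$malloc" -p Passthru0
"$rpcpy" bdev_get_bdevs | jq length           # 2 entries: the malloc bdev plus the passthru stacked on it
"$rpcpy" bdev_passthru_delete Passthru0
"$rpcpy" bdev_malloc_delete "$malloc"
"$rpcpy" bdev_get_bdevs | jq length           # back to 0 once both are torn down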
00:06:17.437 [2024-11-29 16:41:41.047194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.437 [2024-11-29 16:41:41.088129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.437 16:41:41 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.437 16:41:41 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.437 16:41:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:17.437 16:41:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:17.437 16:41:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:17.437 16:41:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:17.437 16:41:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.437 16:41:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.437 16:41:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.437 ************************************ 00:06:17.437 START TEST rpc_integrity 00:06:17.437 ************************************ 00:06:17.437 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:17.697 { 00:06:17.697 "name": "Malloc0", 00:06:17.697 "aliases": [ 00:06:17.697 "6fa3a9d1-a08b-49c1-a617-76aa68a1f420" 00:06:17.697 ], 00:06:17.697 "product_name": "Malloc disk", 00:06:17.697 "block_size": 512, 00:06:17.697 "num_blocks": 16384, 00:06:17.697 "uuid": "6fa3a9d1-a08b-49c1-a617-76aa68a1f420", 00:06:17.697 "assigned_rate_limits": { 00:06:17.697 "rw_ios_per_sec": 0, 00:06:17.697 "rw_mbytes_per_sec": 0, 00:06:17.697 "r_mbytes_per_sec": 0, 00:06:17.697 "w_mbytes_per_sec": 0 00:06:17.697 }, 00:06:17.697 "claimed": false, 00:06:17.697 "zoned": false, 00:06:17.697 
"supported_io_types": { 00:06:17.697 "read": true, 00:06:17.697 "write": true, 00:06:17.697 "unmap": true, 00:06:17.697 "flush": true, 00:06:17.697 "reset": true, 00:06:17.697 "nvme_admin": false, 00:06:17.697 "nvme_io": false, 00:06:17.697 "nvme_io_md": false, 00:06:17.697 "write_zeroes": true, 00:06:17.697 "zcopy": true, 00:06:17.697 "get_zone_info": false, 00:06:17.697 "zone_management": false, 00:06:17.697 "zone_append": false, 00:06:17.697 "compare": false, 00:06:17.697 "compare_and_write": false, 00:06:17.697 "abort": true, 00:06:17.697 "seek_hole": false, 00:06:17.697 "seek_data": false, 00:06:17.697 "copy": true, 00:06:17.697 "nvme_iov_md": false 00:06:17.697 }, 00:06:17.697 "memory_domains": [ 00:06:17.697 { 00:06:17.697 "dma_device_id": "system", 00:06:17.697 "dma_device_type": 1 00:06:17.697 }, 00:06:17.697 { 00:06:17.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.697 "dma_device_type": 2 00:06:17.697 } 00:06:17.697 ], 00:06:17.697 "driver_specific": {} 00:06:17.697 } 00:06:17.697 ]' 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.697 [2024-11-29 16:41:41.376414] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:17.697 [2024-11-29 16:41:41.376474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:17.697 [2024-11-29 16:41:41.376491] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfbb030 00:06:17.697 [2024-11-29 16:41:41.376500] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:17.697 [2024-11-29 16:41:41.377941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:17.697 [2024-11-29 16:41:41.377987] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:17.697 Passthru0 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:17.697 { 00:06:17.697 "name": "Malloc0", 00:06:17.697 "aliases": [ 00:06:17.697 "6fa3a9d1-a08b-49c1-a617-76aa68a1f420" 00:06:17.697 ], 00:06:17.697 "product_name": "Malloc disk", 00:06:17.697 "block_size": 512, 00:06:17.697 "num_blocks": 16384, 00:06:17.697 "uuid": "6fa3a9d1-a08b-49c1-a617-76aa68a1f420", 00:06:17.697 "assigned_rate_limits": { 00:06:17.697 "rw_ios_per_sec": 0, 00:06:17.697 "rw_mbytes_per_sec": 0, 00:06:17.697 "r_mbytes_per_sec": 0, 00:06:17.697 "w_mbytes_per_sec": 0 00:06:17.697 }, 00:06:17.697 "claimed": true, 00:06:17.697 "claim_type": "exclusive_write", 00:06:17.697 "zoned": false, 00:06:17.697 "supported_io_types": { 00:06:17.697 "read": true, 00:06:17.697 "write": true, 00:06:17.697 "unmap": true, 00:06:17.697 "flush": true, 00:06:17.697 "reset": true, 00:06:17.697 "nvme_admin": false, 
00:06:17.697 "nvme_io": false, 00:06:17.697 "nvme_io_md": false, 00:06:17.697 "write_zeroes": true, 00:06:17.697 "zcopy": true, 00:06:17.697 "get_zone_info": false, 00:06:17.697 "zone_management": false, 00:06:17.697 "zone_append": false, 00:06:17.697 "compare": false, 00:06:17.697 "compare_and_write": false, 00:06:17.697 "abort": true, 00:06:17.697 "seek_hole": false, 00:06:17.697 "seek_data": false, 00:06:17.697 "copy": true, 00:06:17.697 "nvme_iov_md": false 00:06:17.697 }, 00:06:17.697 "memory_domains": [ 00:06:17.697 { 00:06:17.697 "dma_device_id": "system", 00:06:17.697 "dma_device_type": 1 00:06:17.697 }, 00:06:17.697 { 00:06:17.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.697 "dma_device_type": 2 00:06:17.697 } 00:06:17.697 ], 00:06:17.697 "driver_specific": {} 00:06:17.697 }, 00:06:17.697 { 00:06:17.697 "name": "Passthru0", 00:06:17.697 "aliases": [ 00:06:17.697 "7ea78bc3-de14-551d-940f-3cc5b72a5ef5" 00:06:17.697 ], 00:06:17.697 "product_name": "passthru", 00:06:17.697 "block_size": 512, 00:06:17.697 "num_blocks": 16384, 00:06:17.697 "uuid": "7ea78bc3-de14-551d-940f-3cc5b72a5ef5", 00:06:17.697 "assigned_rate_limits": { 00:06:17.697 "rw_ios_per_sec": 0, 00:06:17.697 "rw_mbytes_per_sec": 0, 00:06:17.697 "r_mbytes_per_sec": 0, 00:06:17.697 "w_mbytes_per_sec": 0 00:06:17.697 }, 00:06:17.697 "claimed": false, 00:06:17.697 "zoned": false, 00:06:17.697 "supported_io_types": { 00:06:17.697 "read": true, 00:06:17.697 "write": true, 00:06:17.697 "unmap": true, 00:06:17.697 "flush": true, 00:06:17.697 "reset": true, 00:06:17.697 "nvme_admin": false, 00:06:17.697 "nvme_io": false, 00:06:17.697 "nvme_io_md": false, 00:06:17.697 "write_zeroes": true, 00:06:17.697 "zcopy": true, 00:06:17.697 "get_zone_info": false, 00:06:17.697 "zone_management": false, 00:06:17.697 "zone_append": false, 00:06:17.697 "compare": false, 00:06:17.697 "compare_and_write": false, 00:06:17.697 "abort": true, 00:06:17.697 "seek_hole": false, 00:06:17.697 "seek_data": false, 00:06:17.697 "copy": true, 00:06:17.697 "nvme_iov_md": false 00:06:17.697 }, 00:06:17.697 "memory_domains": [ 00:06:17.697 { 00:06:17.697 "dma_device_id": "system", 00:06:17.697 "dma_device_type": 1 00:06:17.697 }, 00:06:17.697 { 00:06:17.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.697 "dma_device_type": 2 00:06:17.697 } 00:06:17.697 ], 00:06:17.697 "driver_specific": { 00:06:17.697 "passthru": { 00:06:17.697 "name": "Passthru0", 00:06:17.697 "base_bdev_name": "Malloc0" 00:06:17.697 } 00:06:17.697 } 00:06:17.697 } 00:06:17.697 ]' 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.697 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:17.697 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.698 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.698 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.698 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:17.698 16:41:41 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.698 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.957 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.957 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:17.957 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:17.957 16:41:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:17.957 00:06:17.957 real 0m0.334s 00:06:17.957 user 0m0.222s 00:06:17.957 sys 0m0.036s 00:06:17.957 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.957 16:41:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:17.957 ************************************ 00:06:17.957 END TEST rpc_integrity 00:06:17.957 ************************************ 00:06:17.957 16:41:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:17.957 16:41:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.957 16:41:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.957 16:41:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.957 ************************************ 00:06:17.957 START TEST rpc_plugins 00:06:17.957 ************************************ 00:06:17.957 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:17.957 16:41:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:17.957 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.957 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:17.957 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.957 16:41:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:17.957 16:41:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:17.957 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.957 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:17.957 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.957 16:41:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:17.958 { 00:06:17.958 "name": "Malloc1", 00:06:17.958 "aliases": [ 00:06:17.958 "a989ec61-a4ae-43b5-951b-5f22481ca724" 00:06:17.958 ], 00:06:17.958 "product_name": "Malloc disk", 00:06:17.958 "block_size": 4096, 00:06:17.958 "num_blocks": 256, 00:06:17.958 "uuid": "a989ec61-a4ae-43b5-951b-5f22481ca724", 00:06:17.958 "assigned_rate_limits": { 00:06:17.958 "rw_ios_per_sec": 0, 00:06:17.958 "rw_mbytes_per_sec": 0, 00:06:17.958 "r_mbytes_per_sec": 0, 00:06:17.958 "w_mbytes_per_sec": 0 00:06:17.958 }, 00:06:17.958 "claimed": false, 00:06:17.958 "zoned": false, 00:06:17.958 "supported_io_types": { 00:06:17.958 "read": true, 00:06:17.958 "write": true, 00:06:17.958 "unmap": true, 00:06:17.958 "flush": true, 00:06:17.958 "reset": true, 00:06:17.958 "nvme_admin": false, 00:06:17.958 "nvme_io": false, 00:06:17.958 "nvme_io_md": false, 00:06:17.958 "write_zeroes": true, 00:06:17.958 "zcopy": true, 00:06:17.958 "get_zone_info": false, 00:06:17.958 "zone_management": false, 00:06:17.958 "zone_append": false, 00:06:17.958 "compare": false, 00:06:17.958 "compare_and_write": false, 00:06:17.958 "abort": true, 00:06:17.958 "seek_hole": false, 00:06:17.958 "seek_data": false, 00:06:17.958 "copy": true, 00:06:17.958 "nvme_iov_md": false 00:06:17.958 }, 00:06:17.958 "memory_domains": [ 00:06:17.958 { 
00:06:17.958 "dma_device_id": "system", 00:06:17.958 "dma_device_type": 1 00:06:17.958 }, 00:06:17.958 { 00:06:17.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:17.958 "dma_device_type": 2 00:06:17.958 } 00:06:17.958 ], 00:06:17.958 "driver_specific": {} 00:06:17.958 } 00:06:17.958 ]' 00:06:17.958 16:41:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:17.958 16:41:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:17.958 16:41:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:17.958 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.958 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:17.958 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.958 16:41:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:17.958 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.958 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:17.958 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.958 16:41:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:17.958 16:41:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:18.217 16:41:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:18.217 00:06:18.217 real 0m0.149s 00:06:18.217 user 0m0.106s 00:06:18.217 sys 0m0.014s 00:06:18.217 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.217 ************************************ 00:06:18.217 END TEST rpc_plugins 00:06:18.217 ************************************ 00:06:18.217 16:41:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 16:41:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:18.217 16:41:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.217 16:41:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.217 16:41:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 ************************************ 00:06:18.217 START TEST rpc_trace_cmd_test 00:06:18.217 ************************************ 00:06:18.217 16:41:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:18.217 16:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:18.217 16:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:18.217 16:41:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.217 16:41:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.217 16:41:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.217 16:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:18.217 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70872", 00:06:18.217 "tpoint_group_mask": "0x8", 00:06:18.217 "iscsi_conn": { 00:06:18.217 "mask": "0x2", 00:06:18.217 "tpoint_mask": "0x0" 00:06:18.217 }, 00:06:18.217 "scsi": { 00:06:18.217 "mask": "0x4", 00:06:18.217 "tpoint_mask": "0x0" 00:06:18.217 }, 00:06:18.217 "bdev": { 00:06:18.217 "mask": "0x8", 00:06:18.217 "tpoint_mask": "0xffffffffffffffff" 00:06:18.217 }, 00:06:18.217 "nvmf_rdma": { 00:06:18.217 "mask": "0x10", 00:06:18.217 "tpoint_mask": "0x0" 00:06:18.217 }, 00:06:18.217 "nvmf_tcp": { 00:06:18.217 "mask": "0x20", 00:06:18.217 "tpoint_mask": "0x0" 00:06:18.217 }, 00:06:18.217 "ftl": { 00:06:18.217 
"mask": "0x40", 00:06:18.217 "tpoint_mask": "0x0" 00:06:18.217 }, 00:06:18.217 "blobfs": { 00:06:18.217 "mask": "0x80", 00:06:18.217 "tpoint_mask": "0x0" 00:06:18.217 }, 00:06:18.217 "dsa": { 00:06:18.217 "mask": "0x200", 00:06:18.217 "tpoint_mask": "0x0" 00:06:18.217 }, 00:06:18.217 "thread": { 00:06:18.217 "mask": "0x400", 00:06:18.217 "tpoint_mask": "0x0" 00:06:18.217 }, 00:06:18.217 "nvme_pcie": { 00:06:18.217 "mask": "0x800", 00:06:18.217 "tpoint_mask": "0x0" 00:06:18.217 }, 00:06:18.217 "iaa": { 00:06:18.217 "mask": "0x1000", 00:06:18.217 "tpoint_mask": "0x0" 00:06:18.217 }, 00:06:18.217 "nvme_tcp": { 00:06:18.217 "mask": "0x2000", 00:06:18.218 "tpoint_mask": "0x0" 00:06:18.218 }, 00:06:18.218 "bdev_nvme": { 00:06:18.218 "mask": "0x4000", 00:06:18.218 "tpoint_mask": "0x0" 00:06:18.218 }, 00:06:18.218 "sock": { 00:06:18.218 "mask": "0x8000", 00:06:18.218 "tpoint_mask": "0x0" 00:06:18.218 }, 00:06:18.218 "blob": { 00:06:18.218 "mask": "0x10000", 00:06:18.218 "tpoint_mask": "0x0" 00:06:18.218 }, 00:06:18.218 "bdev_raid": { 00:06:18.218 "mask": "0x20000", 00:06:18.218 "tpoint_mask": "0x0" 00:06:18.218 }, 00:06:18.218 "scheduler": { 00:06:18.218 "mask": "0x40000", 00:06:18.218 "tpoint_mask": "0x0" 00:06:18.218 } 00:06:18.218 }' 00:06:18.218 16:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:18.218 16:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:18.218 16:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:18.218 16:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:18.218 16:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:18.218 16:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:18.218 16:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:18.477 16:41:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:18.477 16:41:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:18.477 16:41:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:18.477 00:06:18.477 real 0m0.280s 00:06:18.477 user 0m0.238s 00:06:18.477 sys 0m0.029s 00:06:18.477 16:41:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.477 16:41:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 ************************************ 00:06:18.477 END TEST rpc_trace_cmd_test 00:06:18.477 ************************************ 00:06:18.477 16:41:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:18.477 16:41:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:18.477 16:41:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:18.477 16:41:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.477 16:41:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.477 16:41:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 ************************************ 00:06:18.477 START TEST rpc_daemon_integrity 00:06:18.477 ************************************ 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 
16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.477 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:18.477 { 00:06:18.477 "name": "Malloc2", 00:06:18.477 "aliases": [ 00:06:18.477 "749ba6fb-1984-4e8f-b71b-8d68f59e79c6" 00:06:18.477 ], 00:06:18.477 "product_name": "Malloc disk", 00:06:18.477 "block_size": 512, 00:06:18.477 "num_blocks": 16384, 00:06:18.477 "uuid": "749ba6fb-1984-4e8f-b71b-8d68f59e79c6", 00:06:18.477 "assigned_rate_limits": { 00:06:18.477 "rw_ios_per_sec": 0, 00:06:18.477 "rw_mbytes_per_sec": 0, 00:06:18.477 "r_mbytes_per_sec": 0, 00:06:18.477 "w_mbytes_per_sec": 0 00:06:18.477 }, 00:06:18.477 "claimed": false, 00:06:18.477 "zoned": false, 00:06:18.477 "supported_io_types": { 00:06:18.477 "read": true, 00:06:18.477 "write": true, 00:06:18.477 "unmap": true, 00:06:18.477 "flush": true, 00:06:18.477 "reset": true, 00:06:18.477 "nvme_admin": false, 00:06:18.477 "nvme_io": false, 00:06:18.477 "nvme_io_md": false, 00:06:18.477 "write_zeroes": true, 00:06:18.477 "zcopy": true, 00:06:18.477 "get_zone_info": false, 00:06:18.477 "zone_management": false, 00:06:18.477 "zone_append": false, 00:06:18.477 "compare": false, 00:06:18.477 "compare_and_write": false, 00:06:18.477 "abort": true, 00:06:18.477 "seek_hole": false, 00:06:18.477 "seek_data": false, 00:06:18.477 "copy": true, 00:06:18.477 "nvme_iov_md": false 00:06:18.477 }, 00:06:18.477 "memory_domains": [ 00:06:18.477 { 00:06:18.477 "dma_device_id": "system", 00:06:18.477 "dma_device_type": 1 00:06:18.477 }, 00:06:18.477 { 00:06:18.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.477 "dma_device_type": 2 00:06:18.477 } 00:06:18.477 ], 00:06:18.477 "driver_specific": {} 00:06:18.477 } 00:06:18.478 ]' 00:06:18.478 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:18.737 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:18.737 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:18.737 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.737 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.737 [2024-11-29 16:41:42.288839] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:18.737 [2024-11-29 16:41:42.288921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:06:18.737 [2024-11-29 16:41:42.288953] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfbc960 00:06:18.737 [2024-11-29 16:41:42.288978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:18.737 [2024-11-29 16:41:42.290413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:18.737 [2024-11-29 16:41:42.290475] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:18.737 Passthru0 00:06:18.737 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.737 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:18.737 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.737 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.737 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.737 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:18.737 { 00:06:18.737 "name": "Malloc2", 00:06:18.737 "aliases": [ 00:06:18.737 "749ba6fb-1984-4e8f-b71b-8d68f59e79c6" 00:06:18.737 ], 00:06:18.737 "product_name": "Malloc disk", 00:06:18.737 "block_size": 512, 00:06:18.737 "num_blocks": 16384, 00:06:18.737 "uuid": "749ba6fb-1984-4e8f-b71b-8d68f59e79c6", 00:06:18.737 "assigned_rate_limits": { 00:06:18.737 "rw_ios_per_sec": 0, 00:06:18.737 "rw_mbytes_per_sec": 0, 00:06:18.737 "r_mbytes_per_sec": 0, 00:06:18.737 "w_mbytes_per_sec": 0 00:06:18.737 }, 00:06:18.737 "claimed": true, 00:06:18.737 "claim_type": "exclusive_write", 00:06:18.737 "zoned": false, 00:06:18.737 "supported_io_types": { 00:06:18.737 "read": true, 00:06:18.737 "write": true, 00:06:18.737 "unmap": true, 00:06:18.737 "flush": true, 00:06:18.737 "reset": true, 00:06:18.737 "nvme_admin": false, 00:06:18.737 "nvme_io": false, 00:06:18.737 "nvme_io_md": false, 00:06:18.737 "write_zeroes": true, 00:06:18.737 "zcopy": true, 00:06:18.737 "get_zone_info": false, 00:06:18.737 "zone_management": false, 00:06:18.737 "zone_append": false, 00:06:18.737 "compare": false, 00:06:18.737 "compare_and_write": false, 00:06:18.737 "abort": true, 00:06:18.737 "seek_hole": false, 00:06:18.737 "seek_data": false, 00:06:18.737 "copy": true, 00:06:18.737 "nvme_iov_md": false 00:06:18.737 }, 00:06:18.737 "memory_domains": [ 00:06:18.737 { 00:06:18.737 "dma_device_id": "system", 00:06:18.737 "dma_device_type": 1 00:06:18.737 }, 00:06:18.737 { 00:06:18.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.737 "dma_device_type": 2 00:06:18.737 } 00:06:18.737 ], 00:06:18.737 "driver_specific": {} 00:06:18.737 }, 00:06:18.737 { 00:06:18.737 "name": "Passthru0", 00:06:18.737 "aliases": [ 00:06:18.737 "0717918e-002d-57c2-b35e-f23b5e1aedd9" 00:06:18.737 ], 00:06:18.737 "product_name": "passthru", 00:06:18.737 "block_size": 512, 00:06:18.737 "num_blocks": 16384, 00:06:18.737 "uuid": "0717918e-002d-57c2-b35e-f23b5e1aedd9", 00:06:18.737 "assigned_rate_limits": { 00:06:18.737 "rw_ios_per_sec": 0, 00:06:18.737 "rw_mbytes_per_sec": 0, 00:06:18.737 "r_mbytes_per_sec": 0, 00:06:18.737 "w_mbytes_per_sec": 0 00:06:18.737 }, 00:06:18.737 "claimed": false, 00:06:18.737 "zoned": false, 00:06:18.738 "supported_io_types": { 00:06:18.738 "read": true, 00:06:18.738 "write": true, 00:06:18.738 "unmap": true, 00:06:18.738 "flush": true, 00:06:18.738 "reset": true, 00:06:18.738 "nvme_admin": false, 00:06:18.738 "nvme_io": false, 00:06:18.738 "nvme_io_md": 
false, 00:06:18.738 "write_zeroes": true, 00:06:18.738 "zcopy": true, 00:06:18.738 "get_zone_info": false, 00:06:18.738 "zone_management": false, 00:06:18.738 "zone_append": false, 00:06:18.738 "compare": false, 00:06:18.738 "compare_and_write": false, 00:06:18.738 "abort": true, 00:06:18.738 "seek_hole": false, 00:06:18.738 "seek_data": false, 00:06:18.738 "copy": true, 00:06:18.738 "nvme_iov_md": false 00:06:18.738 }, 00:06:18.738 "memory_domains": [ 00:06:18.738 { 00:06:18.738 "dma_device_id": "system", 00:06:18.738 "dma_device_type": 1 00:06:18.738 }, 00:06:18.738 { 00:06:18.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.738 "dma_device_type": 2 00:06:18.738 } 00:06:18.738 ], 00:06:18.738 "driver_specific": { 00:06:18.738 "passthru": { 00:06:18.738 "name": "Passthru0", 00:06:18.738 "base_bdev_name": "Malloc2" 00:06:18.738 } 00:06:18.738 } 00:06:18.738 } 00:06:18.738 ]' 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:18.738 00:06:18.738 real 0m0.324s 00:06:18.738 user 0m0.221s 00:06:18.738 sys 0m0.042s 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.738 ************************************ 00:06:18.738 END TEST rpc_daemon_integrity 00:06:18.738 ************************************ 00:06:18.738 16:41:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:18.738 16:41:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:18.738 16:41:42 rpc -- rpc/rpc.sh@84 -- # killprocess 70872 00:06:18.738 16:41:42 rpc -- common/autotest_common.sh@954 -- # '[' -z 70872 ']' 00:06:18.738 16:41:42 rpc -- common/autotest_common.sh@958 -- # kill -0 70872 00:06:18.738 16:41:42 rpc -- common/autotest_common.sh@959 -- # uname 00:06:18.738 16:41:42 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.738 16:41:42 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70872 00:06:18.998 16:41:42 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.998 
killing process with pid 70872 00:06:18.998 16:41:42 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.998 16:41:42 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70872' 00:06:18.998 16:41:42 rpc -- common/autotest_common.sh@973 -- # kill 70872 00:06:18.998 16:41:42 rpc -- common/autotest_common.sh@978 -- # wait 70872 00:06:18.998 00:06:18.998 real 0m2.164s 00:06:18.998 user 0m2.911s 00:06:18.998 sys 0m0.563s 00:06:18.998 16:41:42 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.998 ************************************ 00:06:18.998 END TEST rpc 00:06:18.998 ************************************ 00:06:18.998 16:41:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.258 16:41:42 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:19.258 16:41:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.258 16:41:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.258 16:41:42 -- common/autotest_common.sh@10 -- # set +x 00:06:19.258 ************************************ 00:06:19.258 START TEST skip_rpc 00:06:19.258 ************************************ 00:06:19.258 16:41:42 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:19.258 * Looking for test storage... 00:06:19.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:19.258 16:41:42 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:19.258 16:41:42 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:19.258 16:41:42 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:19.258 16:41:42 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.258 16:41:42 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.259 16:41:42 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:19.259 16:41:42 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:19.259 16:41:42 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.259 16:41:42 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:19.259 16:41:42 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.259 16:41:42 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:19.259 16:41:42 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:19.259 16:41:42 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.259 16:41:42 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:19.259 16:41:42 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.259 16:41:42 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.259 16:41:42 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.259 16:41:42 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:19.259 16:41:42 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.259 16:41:42 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:19.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.259 --rc genhtml_branch_coverage=1 00:06:19.259 --rc genhtml_function_coverage=1 00:06:19.259 --rc genhtml_legend=1 00:06:19.259 --rc geninfo_all_blocks=1 00:06:19.259 --rc geninfo_unexecuted_blocks=1 00:06:19.259 00:06:19.259 ' 00:06:19.259 16:41:43 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:19.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.259 --rc genhtml_branch_coverage=1 00:06:19.259 --rc genhtml_function_coverage=1 00:06:19.259 --rc genhtml_legend=1 00:06:19.259 --rc geninfo_all_blocks=1 00:06:19.259 --rc geninfo_unexecuted_blocks=1 00:06:19.259 00:06:19.259 ' 00:06:19.259 16:41:43 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:19.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.259 --rc genhtml_branch_coverage=1 00:06:19.259 --rc genhtml_function_coverage=1 00:06:19.259 --rc genhtml_legend=1 00:06:19.259 --rc geninfo_all_blocks=1 00:06:19.259 --rc geninfo_unexecuted_blocks=1 00:06:19.259 00:06:19.259 ' 00:06:19.259 16:41:43 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:19.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.259 --rc genhtml_branch_coverage=1 00:06:19.259 --rc genhtml_function_coverage=1 00:06:19.259 --rc genhtml_legend=1 00:06:19.259 --rc geninfo_all_blocks=1 00:06:19.259 --rc geninfo_unexecuted_blocks=1 00:06:19.259 00:06:19.259 ' 00:06:19.259 16:41:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:19.259 16:41:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:19.259 16:41:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:19.259 16:41:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.259 16:41:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.259 16:41:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.259 ************************************ 00:06:19.259 START TEST skip_rpc 00:06:19.259 ************************************ 00:06:19.259 16:41:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:19.259 16:41:43 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=71065 00:06:19.259 16:41:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.259 16:41:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:19.259 16:41:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:19.518 [2024-11-29 16:41:43.083530] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:19.518 [2024-11-29 16:41:43.083629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71065 ] 00:06:19.518 [2024-11-29 16:41:43.209114] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:19.518 [2024-11-29 16:41:43.239632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.518 [2024-11-29 16:41:43.259902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.518 [2024-11-29 16:41:43.293816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71065 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 71065 ']' 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 71065 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71065 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.787 
16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.787 killing process with pid 71065 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71065' 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 71065 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 71065 00:06:24.787 00:06:24.787 real 0m5.250s 00:06:24.787 user 0m4.979s 00:06:24.787 sys 0m0.190s 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.787 16:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.787 ************************************ 00:06:24.787 END TEST skip_rpc 00:06:24.787 ************************************ 00:06:24.787 16:41:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:24.787 16:41:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.787 16:41:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.787 16:41:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.787 ************************************ 00:06:24.787 START TEST skip_rpc_with_json 00:06:24.787 ************************************ 00:06:24.787 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:24.787 16:41:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:24.787 16:41:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71146 00:06:24.787 16:41:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.787 16:41:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71146 00:06:24.787 16:41:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.787 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71146 ']' 00:06:24.787 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.787 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.788 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.788 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.788 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:24.788 [2024-11-29 16:41:48.388028] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:24.788 [2024-11-29 16:41:48.388129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71146 ] 00:06:24.788 [2024-11-29 16:41:48.513725] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:24.788 [2024-11-29 16:41:48.540047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.788 [2024-11-29 16:41:48.559126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.047 [2024-11-29 16:41:48.595116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.047 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.047 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:25.047 16:41:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:25.047 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.047 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.047 [2024-11-29 16:41:48.710042] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:25.047 request: 00:06:25.047 { 00:06:25.047 "trtype": "tcp", 00:06:25.047 "method": "nvmf_get_transports", 00:06:25.047 "req_id": 1 00:06:25.047 } 00:06:25.047 Got JSON-RPC error response 00:06:25.047 response: 00:06:25.047 { 00:06:25.047 "code": -19, 00:06:25.047 "message": "No such device" 00:06:25.047 } 00:06:25.047 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:25.047 16:41:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:25.047 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.047 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.047 [2024-11-29 16:41:48.722144] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.047 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.047 16:41:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:25.047 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.047 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.306 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.306 16:41:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:25.306 { 00:06:25.306 "subsystems": [ 00:06:25.306 { 00:06:25.306 "subsystem": "fsdev", 00:06:25.306 "config": [ 00:06:25.306 { 00:06:25.306 "method": "fsdev_set_opts", 00:06:25.306 "params": { 00:06:25.306 "fsdev_io_pool_size": 65535, 00:06:25.306 "fsdev_io_cache_size": 256 00:06:25.306 } 00:06:25.306 } 00:06:25.306 ] 00:06:25.306 }, 00:06:25.306 { 00:06:25.306 "subsystem": "keyring", 00:06:25.306 "config": [] 00:06:25.306 }, 00:06:25.306 { 00:06:25.306 "subsystem": "iobuf", 00:06:25.306 "config": [ 00:06:25.306 { 00:06:25.306 "method": "iobuf_set_options", 00:06:25.306 "params": { 00:06:25.306 "small_pool_count": 8192, 00:06:25.306 "large_pool_count": 1024, 00:06:25.306 "small_bufsize": 8192, 00:06:25.306 "large_bufsize": 135168, 00:06:25.306 "enable_numa": false 00:06:25.306 } 00:06:25.306 } 00:06:25.306 ] 00:06:25.306 }, 00:06:25.306 { 00:06:25.306 "subsystem": "sock", 00:06:25.306 "config": [ 00:06:25.306 { 00:06:25.306 "method": "sock_set_default_impl", 00:06:25.306 "params": { 00:06:25.306 "impl_name": "uring" 00:06:25.306 } 00:06:25.306 }, 00:06:25.306 { 00:06:25.306 "method": 
"sock_impl_set_options", 00:06:25.306 "params": { 00:06:25.306 "impl_name": "ssl", 00:06:25.306 "recv_buf_size": 4096, 00:06:25.307 "send_buf_size": 4096, 00:06:25.307 "enable_recv_pipe": true, 00:06:25.307 "enable_quickack": false, 00:06:25.307 "enable_placement_id": 0, 00:06:25.307 "enable_zerocopy_send_server": true, 00:06:25.307 "enable_zerocopy_send_client": false, 00:06:25.307 "zerocopy_threshold": 0, 00:06:25.307 "tls_version": 0, 00:06:25.307 "enable_ktls": false 00:06:25.307 } 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "method": "sock_impl_set_options", 00:06:25.307 "params": { 00:06:25.307 "impl_name": "posix", 00:06:25.307 "recv_buf_size": 2097152, 00:06:25.307 "send_buf_size": 2097152, 00:06:25.307 "enable_recv_pipe": true, 00:06:25.307 "enable_quickack": false, 00:06:25.307 "enable_placement_id": 0, 00:06:25.307 "enable_zerocopy_send_server": true, 00:06:25.307 "enable_zerocopy_send_client": false, 00:06:25.307 "zerocopy_threshold": 0, 00:06:25.307 "tls_version": 0, 00:06:25.307 "enable_ktls": false 00:06:25.307 } 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "method": "sock_impl_set_options", 00:06:25.307 "params": { 00:06:25.307 "impl_name": "uring", 00:06:25.307 "recv_buf_size": 2097152, 00:06:25.307 "send_buf_size": 2097152, 00:06:25.307 "enable_recv_pipe": true, 00:06:25.307 "enable_quickack": false, 00:06:25.307 "enable_placement_id": 0, 00:06:25.307 "enable_zerocopy_send_server": false, 00:06:25.307 "enable_zerocopy_send_client": false, 00:06:25.307 "zerocopy_threshold": 0, 00:06:25.307 "tls_version": 0, 00:06:25.307 "enable_ktls": false 00:06:25.307 } 00:06:25.307 } 00:06:25.307 ] 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "subsystem": "vmd", 00:06:25.307 "config": [] 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "subsystem": "accel", 00:06:25.307 "config": [ 00:06:25.307 { 00:06:25.307 "method": "accel_set_options", 00:06:25.307 "params": { 00:06:25.307 "small_cache_size": 128, 00:06:25.307 "large_cache_size": 16, 00:06:25.307 "task_count": 2048, 00:06:25.307 "sequence_count": 2048, 00:06:25.307 "buf_count": 2048 00:06:25.307 } 00:06:25.307 } 00:06:25.307 ] 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "subsystem": "bdev", 00:06:25.307 "config": [ 00:06:25.307 { 00:06:25.307 "method": "bdev_set_options", 00:06:25.307 "params": { 00:06:25.307 "bdev_io_pool_size": 65535, 00:06:25.307 "bdev_io_cache_size": 256, 00:06:25.307 "bdev_auto_examine": true, 00:06:25.307 "iobuf_small_cache_size": 128, 00:06:25.307 "iobuf_large_cache_size": 16 00:06:25.307 } 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "method": "bdev_raid_set_options", 00:06:25.307 "params": { 00:06:25.307 "process_window_size_kb": 1024, 00:06:25.307 "process_max_bandwidth_mb_sec": 0 00:06:25.307 } 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "method": "bdev_iscsi_set_options", 00:06:25.307 "params": { 00:06:25.307 "timeout_sec": 30 00:06:25.307 } 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "method": "bdev_nvme_set_options", 00:06:25.307 "params": { 00:06:25.307 "action_on_timeout": "none", 00:06:25.307 "timeout_us": 0, 00:06:25.307 "timeout_admin_us": 0, 00:06:25.307 "keep_alive_timeout_ms": 10000, 00:06:25.307 "arbitration_burst": 0, 00:06:25.307 "low_priority_weight": 0, 00:06:25.307 "medium_priority_weight": 0, 00:06:25.307 "high_priority_weight": 0, 00:06:25.307 "nvme_adminq_poll_period_us": 10000, 00:06:25.307 "nvme_ioq_poll_period_us": 0, 00:06:25.307 "io_queue_requests": 0, 00:06:25.307 "delay_cmd_submit": true, 00:06:25.307 "transport_retry_count": 4, 00:06:25.307 "bdev_retry_count": 3, 00:06:25.307 
"transport_ack_timeout": 0, 00:06:25.307 "ctrlr_loss_timeout_sec": 0, 00:06:25.307 "reconnect_delay_sec": 0, 00:06:25.307 "fast_io_fail_timeout_sec": 0, 00:06:25.307 "disable_auto_failback": false, 00:06:25.307 "generate_uuids": false, 00:06:25.307 "transport_tos": 0, 00:06:25.307 "nvme_error_stat": false, 00:06:25.307 "rdma_srq_size": 0, 00:06:25.307 "io_path_stat": false, 00:06:25.307 "allow_accel_sequence": false, 00:06:25.307 "rdma_max_cq_size": 0, 00:06:25.307 "rdma_cm_event_timeout_ms": 0, 00:06:25.307 "dhchap_digests": [ 00:06:25.307 "sha256", 00:06:25.307 "sha384", 00:06:25.307 "sha512" 00:06:25.307 ], 00:06:25.307 "dhchap_dhgroups": [ 00:06:25.307 "null", 00:06:25.307 "ffdhe2048", 00:06:25.307 "ffdhe3072", 00:06:25.307 "ffdhe4096", 00:06:25.307 "ffdhe6144", 00:06:25.307 "ffdhe8192" 00:06:25.307 ] 00:06:25.307 } 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "method": "bdev_nvme_set_hotplug", 00:06:25.307 "params": { 00:06:25.307 "period_us": 100000, 00:06:25.307 "enable": false 00:06:25.307 } 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "method": "bdev_wait_for_examine" 00:06:25.307 } 00:06:25.307 ] 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "subsystem": "scsi", 00:06:25.307 "config": null 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "subsystem": "scheduler", 00:06:25.307 "config": [ 00:06:25.307 { 00:06:25.307 "method": "framework_set_scheduler", 00:06:25.307 "params": { 00:06:25.307 "name": "static" 00:06:25.307 } 00:06:25.307 } 00:06:25.307 ] 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "subsystem": "vhost_scsi", 00:06:25.307 "config": [] 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "subsystem": "vhost_blk", 00:06:25.307 "config": [] 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "subsystem": "ublk", 00:06:25.307 "config": [] 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "subsystem": "nbd", 00:06:25.307 "config": [] 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "subsystem": "nvmf", 00:06:25.307 "config": [ 00:06:25.307 { 00:06:25.307 "method": "nvmf_set_config", 00:06:25.307 "params": { 00:06:25.307 "discovery_filter": "match_any", 00:06:25.307 "admin_cmd_passthru": { 00:06:25.307 "identify_ctrlr": false 00:06:25.307 }, 00:06:25.307 "dhchap_digests": [ 00:06:25.307 "sha256", 00:06:25.307 "sha384", 00:06:25.307 "sha512" 00:06:25.307 ], 00:06:25.307 "dhchap_dhgroups": [ 00:06:25.307 "null", 00:06:25.307 "ffdhe2048", 00:06:25.307 "ffdhe3072", 00:06:25.307 "ffdhe4096", 00:06:25.307 "ffdhe6144", 00:06:25.307 "ffdhe8192" 00:06:25.307 ] 00:06:25.307 } 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "method": "nvmf_set_max_subsystems", 00:06:25.307 "params": { 00:06:25.307 "max_subsystems": 1024 00:06:25.307 } 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "method": "nvmf_set_crdt", 00:06:25.307 "params": { 00:06:25.307 "crdt1": 0, 00:06:25.307 "crdt2": 0, 00:06:25.307 "crdt3": 0 00:06:25.307 } 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "method": "nvmf_create_transport", 00:06:25.307 "params": { 00:06:25.307 "trtype": "TCP", 00:06:25.307 "max_queue_depth": 128, 00:06:25.307 "max_io_qpairs_per_ctrlr": 127, 00:06:25.307 "in_capsule_data_size": 4096, 00:06:25.307 "max_io_size": 131072, 00:06:25.307 "io_unit_size": 131072, 00:06:25.307 "max_aq_depth": 128, 00:06:25.307 "num_shared_buffers": 511, 00:06:25.307 "buf_cache_size": 4294967295, 00:06:25.307 "dif_insert_or_strip": false, 00:06:25.307 "zcopy": false, 00:06:25.307 "c2h_success": true, 00:06:25.307 "sock_priority": 0, 00:06:25.307 "abort_timeout_sec": 1, 00:06:25.307 "ack_timeout": 0, 00:06:25.307 "data_wr_pool_size": 0 00:06:25.307 } 00:06:25.307 } 
00:06:25.307 ] 00:06:25.307 }, 00:06:25.307 { 00:06:25.307 "subsystem": "iscsi", 00:06:25.307 "config": [ 00:06:25.307 { 00:06:25.307 "method": "iscsi_set_options", 00:06:25.307 "params": { 00:06:25.307 "node_base": "iqn.2016-06.io.spdk", 00:06:25.307 "max_sessions": 128, 00:06:25.307 "max_connections_per_session": 2, 00:06:25.307 "max_queue_depth": 64, 00:06:25.307 "default_time2wait": 2, 00:06:25.307 "default_time2retain": 20, 00:06:25.307 "first_burst_length": 8192, 00:06:25.307 "immediate_data": true, 00:06:25.307 "allow_duplicated_isid": false, 00:06:25.307 "error_recovery_level": 0, 00:06:25.307 "nop_timeout": 60, 00:06:25.307 "nop_in_interval": 30, 00:06:25.307 "disable_chap": false, 00:06:25.307 "require_chap": false, 00:06:25.307 "mutual_chap": false, 00:06:25.307 "chap_group": 0, 00:06:25.307 "max_large_datain_per_connection": 64, 00:06:25.307 "max_r2t_per_connection": 4, 00:06:25.307 "pdu_pool_size": 36864, 00:06:25.307 "immediate_data_pool_size": 16384, 00:06:25.307 "data_out_pool_size": 2048 00:06:25.307 } 00:06:25.307 } 00:06:25.307 ] 00:06:25.307 } 00:06:25.307 ] 00:06:25.307 } 00:06:25.307 16:41:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:25.307 16:41:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71146 00:06:25.307 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71146 ']' 00:06:25.307 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71146 00:06:25.307 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:25.307 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.307 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71146 00:06:25.307 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.307 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.308 killing process with pid 71146 00:06:25.308 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71146' 00:06:25.308 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71146 00:06:25.308 16:41:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71146 00:06:25.566 16:41:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71166 00:06:25.567 16:41:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:25.567 16:41:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71166 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71166 ']' 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71166 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71166 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71166' 00:06:30.837 killing process with pid 71166 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71166 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71166 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:30.837 00:06:30.837 real 0m6.106s 00:06:30.837 user 0m5.816s 00:06:30.837 sys 0m0.434s 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.837 ************************************ 00:06:30.837 END TEST skip_rpc_with_json 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.837 ************************************ 00:06:30.837 16:41:54 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:30.837 16:41:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.837 16:41:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.837 16:41:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.837 ************************************ 00:06:30.837 START TEST skip_rpc_with_delay 00:06:30.837 ************************************ 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:30.837 [2024-11-29 
16:41:54.575281] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.837 00:06:30.837 real 0m0.124s 00:06:30.837 user 0m0.082s 00:06:30.837 sys 0m0.039s 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.837 16:41:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:30.837 ************************************ 00:06:30.837 END TEST skip_rpc_with_delay 00:06:30.837 ************************************ 00:06:31.097 16:41:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:31.097 16:41:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:31.097 16:41:54 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:31.097 16:41:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.097 16:41:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.097 16:41:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.097 ************************************ 00:06:31.097 START TEST exit_on_failed_rpc_init 00:06:31.097 ************************************ 00:06:31.097 16:41:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:31.097 16:41:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.097 16:41:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71276 00:06:31.097 16:41:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71276 00:06:31.097 16:41:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71276 ']' 00:06:31.097 16:41:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.097 16:41:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.097 16:41:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.097 16:41:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.097 16:41:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:31.097 [2024-11-29 16:41:54.723731] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:31.097 [2024-11-29 16:41:54.723843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71276 ] 00:06:31.097 [2024-11-29 16:41:54.850298] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:31.097 [2024-11-29 16:41:54.879718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.356 [2024-11-29 16:41:54.901984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.356 [2024-11-29 16:41:54.938976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.356 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.356 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:31.356 16:41:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.356 16:41:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:31.356 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:31.356 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:31.356 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:31.356 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.356 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:31.356 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.356 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:31.356 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.357 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:31.357 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:31.357 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:31.357 [2024-11-29 16:41:55.137633] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:31.357 [2024-11-29 16:41:55.137746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71286 ] 00:06:31.615 [2024-11-29 16:41:55.263783] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:31.615 [2024-11-29 16:41:55.287939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.615 [2024-11-29 16:41:55.306968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.615 [2024-11-29 16:41:55.307093] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:31.615 [2024-11-29 16:41:55.307105] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:31.615 [2024-11-29 16:41:55.307112] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71276 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71276 ']' 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71276 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71276 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.615 killing process with pid 71276 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71276' 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71276 00:06:31.615 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71276 00:06:31.873 00:06:31.873 real 0m0.962s 00:06:31.873 user 0m1.108s 00:06:31.873 sys 0m0.273s 00:06:31.873 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.873 16:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:31.873 ************************************ 00:06:31.873 END TEST exit_on_failed_rpc_init 00:06:31.873 ************************************ 00:06:31.873 16:41:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:31.873 00:06:31.873 real 0m12.845s 00:06:31.873 user 0m12.184s 00:06:31.873 sys 0m1.129s 00:06:31.873 16:41:55 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.873 ************************************ 00:06:31.873 END TEST skip_rpc 00:06:31.873 ************************************ 00:06:31.873 16:41:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.132 16:41:55 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:32.132 16:41:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.132 16:41:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.132 16:41:55 -- common/autotest_common.sh@10 -- # set +x 00:06:32.132 
************************************ 00:06:32.132 START TEST rpc_client 00:06:32.132 ************************************ 00:06:32.132 16:41:55 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:32.132 * Looking for test storage... 00:06:32.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:32.132 16:41:55 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:32.132 16:41:55 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:32.132 16:41:55 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:32.132 16:41:55 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.132 16:41:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.133 16:41:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:32.133 16:41:55 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.133 16:41:55 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:32.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.133 --rc genhtml_branch_coverage=1 00:06:32.133 --rc genhtml_function_coverage=1 00:06:32.133 --rc genhtml_legend=1 00:06:32.133 --rc geninfo_all_blocks=1 00:06:32.133 --rc geninfo_unexecuted_blocks=1 00:06:32.133 00:06:32.133 ' 00:06:32.133 16:41:55 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:32.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.133 --rc genhtml_branch_coverage=1 00:06:32.133 --rc genhtml_function_coverage=1 00:06:32.133 --rc genhtml_legend=1 00:06:32.133 --rc geninfo_all_blocks=1 00:06:32.133 --rc geninfo_unexecuted_blocks=1 00:06:32.133 00:06:32.133 ' 00:06:32.133 16:41:55 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:32.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.133 --rc genhtml_branch_coverage=1 00:06:32.133 --rc genhtml_function_coverage=1 00:06:32.133 --rc genhtml_legend=1 00:06:32.133 --rc geninfo_all_blocks=1 00:06:32.133 --rc geninfo_unexecuted_blocks=1 00:06:32.133 00:06:32.133 ' 00:06:32.133 16:41:55 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:32.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.133 --rc genhtml_branch_coverage=1 00:06:32.133 --rc genhtml_function_coverage=1 00:06:32.133 --rc genhtml_legend=1 00:06:32.133 --rc geninfo_all_blocks=1 00:06:32.133 --rc geninfo_unexecuted_blocks=1 00:06:32.133 00:06:32.133 ' 00:06:32.133 16:41:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:32.133 OK 00:06:32.392 16:41:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:32.392 00:06:32.392 real 0m0.202s 00:06:32.392 user 0m0.108s 00:06:32.392 sys 0m0.101s 00:06:32.392 16:41:55 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.392 16:41:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 ************************************ 00:06:32.392 END TEST rpc_client 00:06:32.392 ************************************ 00:06:32.392 16:41:55 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:32.392 16:41:55 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.392 16:41:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.392 16:41:55 -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 ************************************ 00:06:32.392 START TEST json_config 00:06:32.392 ************************************ 00:06:32.392 16:41:55 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:32.392 16:41:56 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:32.392 16:41:56 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:32.392 16:41:56 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:32.392 16:41:56 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:32.392 16:41:56 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.392 16:41:56 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.392 16:41:56 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.392 16:41:56 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.392 16:41:56 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.392 16:41:56 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.392 16:41:56 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.392 16:41:56 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.392 16:41:56 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.392 16:41:56 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.392 16:41:56 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.392 16:41:56 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:32.392 16:41:56 json_config -- scripts/common.sh@345 -- # : 1 00:06:32.392 16:41:56 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.392 16:41:56 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.392 16:41:56 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:32.392 16:41:56 json_config -- scripts/common.sh@353 -- # local d=1 00:06:32.392 16:41:56 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.392 16:41:56 json_config -- scripts/common.sh@355 -- # echo 1 00:06:32.392 16:41:56 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.392 16:41:56 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:32.392 16:41:56 json_config -- scripts/common.sh@353 -- # local d=2 00:06:32.392 16:41:56 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.392 16:41:56 json_config -- scripts/common.sh@355 -- # echo 2 00:06:32.392 16:41:56 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.392 16:41:56 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.392 16:41:56 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.392 16:41:56 json_config -- scripts/common.sh@368 -- # return 0 00:06:32.392 16:41:56 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.392 16:41:56 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:32.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.392 --rc genhtml_branch_coverage=1 00:06:32.392 --rc genhtml_function_coverage=1 00:06:32.392 --rc genhtml_legend=1 00:06:32.392 --rc geninfo_all_blocks=1 00:06:32.392 --rc geninfo_unexecuted_blocks=1 00:06:32.392 00:06:32.392 ' 00:06:32.392 16:41:56 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:32.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.392 --rc genhtml_branch_coverage=1 00:06:32.392 --rc genhtml_function_coverage=1 00:06:32.392 --rc genhtml_legend=1 00:06:32.392 --rc geninfo_all_blocks=1 00:06:32.392 --rc geninfo_unexecuted_blocks=1 00:06:32.392 00:06:32.392 ' 00:06:32.392 16:41:56 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:32.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.392 --rc genhtml_branch_coverage=1 00:06:32.392 --rc genhtml_function_coverage=1 00:06:32.392 --rc genhtml_legend=1 00:06:32.392 --rc geninfo_all_blocks=1 00:06:32.392 --rc geninfo_unexecuted_blocks=1 00:06:32.392 00:06:32.392 ' 00:06:32.392 16:41:56 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:32.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.393 --rc genhtml_branch_coverage=1 00:06:32.393 --rc genhtml_function_coverage=1 00:06:32.393 --rc genhtml_legend=1 00:06:32.393 --rc geninfo_all_blocks=1 00:06:32.393 --rc geninfo_unexecuted_blocks=1 00:06:32.393 00:06:32.393 ' 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.393 16:41:56 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:32.393 16:41:56 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.393 16:41:56 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.393 16:41:56 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.393 16:41:56 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.393 16:41:56 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.393 16:41:56 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.393 16:41:56 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.393 16:41:56 json_config -- paths/export.sh@5 -- # export PATH 00:06:32.393 16:41:56 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@51 -- # : 0 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:32.393 16:41:56 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:32.393 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:32.393 16:41:56 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:32.393 INFO: JSON configuration test init 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:32.393 16:41:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.393 16:41:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:32.393 16:41:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.393 16:41:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.393 16:41:56 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:32.393 16:41:56 json_config -- json_config/common.sh@9 -- # local app=target 00:06:32.393 16:41:56 json_config -- json_config/common.sh@10 -- # shift 
00:06:32.393 16:41:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:32.393 16:41:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:32.393 16:41:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:32.393 16:41:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:32.393 16:41:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:32.393 16:41:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71420 00:06:32.393 16:41:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:32.393 Waiting for target to run... 00:06:32.393 16:41:56 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:32.393 16:41:56 json_config -- json_config/common.sh@25 -- # waitforlisten 71420 /var/tmp/spdk_tgt.sock 00:06:32.393 16:41:56 json_config -- common/autotest_common.sh@835 -- # '[' -z 71420 ']' 00:06:32.393 16:41:56 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:32.393 16:41:56 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:32.393 16:41:56 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:32.393 16:41:56 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.393 16:41:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.653 [2024-11-29 16:41:56.255970] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:32.653 [2024-11-29 16:41:56.256297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71420 ] 00:06:32.911 [2024-11-29 16:41:56.541160] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:32.911 [2024-11-29 16:41:56.574921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.911 [2024-11-29 16:41:56.590993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.874 16:41:57 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.874 00:06:33.874 16:41:57 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:33.874 16:41:57 json_config -- json_config/common.sh@26 -- # echo '' 00:06:33.874 16:41:57 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:33.874 16:41:57 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:33.874 16:41:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.874 16:41:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.874 16:41:57 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:33.874 16:41:57 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:33.874 16:41:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.874 16:41:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.874 16:41:57 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:33.874 16:41:57 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:33.874 16:41:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:34.134 [2024-11-29 16:41:57.713390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.134 16:41:57 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:34.134 16:41:57 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:34.134 16:41:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.134 16:41:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.134 16:41:57 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:34.134 16:41:57 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:34.134 16:41:57 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:34.134 16:41:57 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:34.134 16:41:57 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:34.134 16:41:57 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:34.134 16:41:57 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:34.134 16:41:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:34.392 16:41:58 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:34.392 16:41:58 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:34.392 16:41:58 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:34.392 16:41:58 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:34.392 16:41:58 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:34.392 
16:41:58 json_config -- json_config/json_config.sh@54 -- # sort 00:06:34.392 16:41:58 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:34.392 16:41:58 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:34.392 16:41:58 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:34.392 16:41:58 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:34.393 16:41:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:34.393 16:41:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.393 16:41:58 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:34.393 16:41:58 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:34.393 16:41:58 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:34.393 16:41:58 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:34.393 16:41:58 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:34.393 16:41:58 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:34.393 16:41:58 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:34.393 16:41:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.393 16:41:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.393 16:41:58 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:34.393 16:41:58 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:34.393 16:41:58 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:34.393 16:41:58 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:34.393 16:41:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:34.651 MallocForNvmf0 00:06:34.651 16:41:58 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:34.652 16:41:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:35.220 MallocForNvmf1 00:06:35.220 16:41:58 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:35.220 16:41:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:35.220 [2024-11-29 16:41:58.946006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.220 16:41:58 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:35.220 16:41:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:35.480 16:41:59 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:35.480 16:41:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:35.738 16:41:59 json_config -- 
json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:35.738 16:41:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:35.997 16:41:59 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:35.997 16:41:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:36.255 [2024-11-29 16:41:59.970553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:36.255 16:41:59 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:36.255 16:41:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.255 16:41:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.255 16:42:00 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:36.255 16:42:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.255 16:42:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.514 16:42:00 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:36.514 16:42:00 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:36.514 16:42:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:36.773 MallocBdevForConfigChangeCheck 00:06:36.773 16:42:00 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:36.773 16:42:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.774 16:42:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.774 16:42:00 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:36.774 16:42:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:37.342 INFO: shutting down applications... 00:06:37.342 16:42:00 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
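The create_nvmf_subsystem_config step traced above is a fixed sequence of RPCs against the target socket; condensed, it amounts to the calls below. The commands, bdev names, NQN, address and port are copied from the trace; only the rpc and sock shell variables are added for brevity.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock
$rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB malloc bdev, 512-byte blocks
$rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB malloc bdev, 1024-byte blocks
$rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport (options as in the trace)
$rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420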
00:06:37.342 16:42:00 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:37.342 16:42:00 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:37.342 16:42:00 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:37.342 16:42:00 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:37.599 Calling clear_iscsi_subsystem 00:06:37.599 Calling clear_nvmf_subsystem 00:06:37.599 Calling clear_nbd_subsystem 00:06:37.599 Calling clear_ublk_subsystem 00:06:37.599 Calling clear_vhost_blk_subsystem 00:06:37.599 Calling clear_vhost_scsi_subsystem 00:06:37.599 Calling clear_bdev_subsystem 00:06:37.599 16:42:01 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:37.599 16:42:01 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:37.599 16:42:01 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:37.599 16:42:01 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:37.599 16:42:01 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:37.599 16:42:01 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:37.858 16:42:01 json_config -- json_config/json_config.sh@352 -- # break 00:06:37.858 16:42:01 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:37.858 16:42:01 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:37.858 16:42:01 json_config -- json_config/common.sh@31 -- # local app=target 00:06:37.858 16:42:01 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:37.858 16:42:01 json_config -- json_config/common.sh@35 -- # [[ -n 71420 ]] 00:06:37.858 16:42:01 json_config -- json_config/common.sh@38 -- # kill -SIGINT 71420 00:06:37.858 16:42:01 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:37.858 16:42:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.858 16:42:01 json_config -- json_config/common.sh@41 -- # kill -0 71420 00:06:37.858 16:42:01 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:38.426 16:42:02 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:38.426 16:42:02 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.426 16:42:02 json_config -- json_config/common.sh@41 -- # kill -0 71420 00:06:38.426 16:42:02 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:38.426 16:42:02 json_config -- json_config/common.sh@43 -- # break 00:06:38.426 16:42:02 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:38.426 SPDK target shutdown done 00:06:38.426 INFO: relaunching applications... 00:06:38.426 16:42:02 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:38.426 16:42:02 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
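The relaunch announced here restarts the target from the configuration that was just dumped with save_config. In essence it is the sketch below; error handling and PID bookkeeping are omitted, and tgt_pid stands in for the concrete PID (71420 in this run).
# dump the live configuration, stop the target, then boot a fresh one from the saved JSON
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
kill -SIGINT "$tgt_pid" && wait "$tgt_pid"
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &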
00:06:38.426 16:42:02 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:38.426 16:42:02 json_config -- json_config/common.sh@9 -- # local app=target 00:06:38.426 16:42:02 json_config -- json_config/common.sh@10 -- # shift 00:06:38.426 16:42:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:38.426 16:42:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:38.426 16:42:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:38.426 16:42:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:38.426 16:42:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:38.426 16:42:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71616 00:06:38.426 Waiting for target to run... 00:06:38.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:38.426 16:42:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:38.426 16:42:02 json_config -- json_config/common.sh@25 -- # waitforlisten 71616 /var/tmp/spdk_tgt.sock 00:06:38.426 16:42:02 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:38.426 16:42:02 json_config -- common/autotest_common.sh@835 -- # '[' -z 71616 ']' 00:06:38.426 16:42:02 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:38.426 16:42:02 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.426 16:42:02 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:38.426 16:42:02 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.426 16:42:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.426 [2024-11-29 16:42:02.165366] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:38.426 [2024-11-29 16:42:02.166188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71616 ] 00:06:38.685 [2024-11-29 16:42:02.460443] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:38.943 [2024-11-29 16:42:02.488703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.943 [2024-11-29 16:42:02.500452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.943 [2024-11-29 16:42:02.629576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.201 [2024-11-29 16:42:02.820323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.201 [2024-11-29 16:42:02.852391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:39.460 00:06:39.460 INFO: Checking if target configuration is the same... 
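The check announced here ("Checking if target configuration is the same...") boils down to normalizing two JSON dumps and diffing them: the on-disk file the target was booted from and the target's current save_config output, both passed through config_filter.py -method sort so key order cannot cause false differences. A rough sketch follows; the /tmp file names are illustrative (json_diff.sh uses mktemp).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
$filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/expected.json
$rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/actual.json
diff -u /tmp/expected.json /tmp/actual.json && echo 'INFO: JSON config files are the same'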
00:06:39.460 16:42:03 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.460 16:42:03 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:39.460 16:42:03 json_config -- json_config/common.sh@26 -- # echo '' 00:06:39.460 16:42:03 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:39.460 16:42:03 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:39.460 16:42:03 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:39.460 16:42:03 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:39.460 16:42:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:39.460 + '[' 2 -ne 2 ']' 00:06:39.460 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:39.460 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:39.460 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:39.460 +++ basename /dev/fd/62 00:06:39.460 ++ mktemp /tmp/62.XXX 00:06:39.460 + tmp_file_1=/tmp/62.Dqh 00:06:39.460 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:39.460 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:39.460 + tmp_file_2=/tmp/spdk_tgt_config.json.i60 00:06:39.460 + ret=0 00:06:39.460 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:40.027 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:40.027 + diff -u /tmp/62.Dqh /tmp/spdk_tgt_config.json.i60 00:06:40.027 INFO: JSON config files are the same 00:06:40.027 + echo 'INFO: JSON config files are the same' 00:06:40.027 + rm /tmp/62.Dqh /tmp/spdk_tgt_config.json.i60 00:06:40.027 + exit 0 00:06:40.027 16:42:03 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:40.028 INFO: changing configuration and checking if this can be detected... 00:06:40.028 16:42:03 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:40.028 16:42:03 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:40.028 16:42:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:40.286 16:42:03 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:40.286 16:42:03 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:40.286 16:42:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:40.286 + '[' 2 -ne 2 ']' 00:06:40.286 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:40.286 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:40.286 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:40.286 +++ basename /dev/fd/62 00:06:40.286 ++ mktemp /tmp/62.XXX 00:06:40.286 + tmp_file_1=/tmp/62.wnX 00:06:40.286 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:40.286 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:40.286 + tmp_file_2=/tmp/spdk_tgt_config.json.wjj 00:06:40.286 + ret=0 00:06:40.286 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:40.545 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:40.804 + diff -u /tmp/62.wnX /tmp/spdk_tgt_config.json.wjj 00:06:40.804 + ret=1 00:06:40.804 + echo '=== Start of file: /tmp/62.wnX ===' 00:06:40.804 + cat /tmp/62.wnX 00:06:40.804 + echo '=== End of file: /tmp/62.wnX ===' 00:06:40.804 + echo '' 00:06:40.804 + echo '=== Start of file: /tmp/spdk_tgt_config.json.wjj ===' 00:06:40.804 + cat /tmp/spdk_tgt_config.json.wjj 00:06:40.804 + echo '=== End of file: /tmp/spdk_tgt_config.json.wjj ===' 00:06:40.804 + echo '' 00:06:40.804 + rm /tmp/62.wnX /tmp/spdk_tgt_config.json.wjj 00:06:40.804 + exit 1 00:06:40.804 INFO: configuration change detected. 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@324 -- # [[ -n 71616 ]] 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.804 16:42:04 json_config -- json_config/json_config.sh@330 -- # killprocess 71616 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@954 -- # '[' -z 71616 ']' 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@958 -- # kill -0 71616 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@959 -- # uname 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71616 00:06:40.804 
killing process with pid 71616 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71616' 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@973 -- # kill 71616 00:06:40.804 16:42:04 json_config -- common/autotest_common.sh@978 -- # wait 71616 00:06:41.063 16:42:04 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:41.063 16:42:04 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:41.063 16:42:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.063 16:42:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.063 INFO: Success 00:06:41.063 16:42:04 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:41.063 16:42:04 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:41.063 ************************************ 00:06:41.063 END TEST json_config 00:06:41.063 ************************************ 00:06:41.063 00:06:41.063 real 0m8.711s 00:06:41.063 user 0m12.818s 00:06:41.063 sys 0m1.459s 00:06:41.063 16:42:04 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.063 16:42:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.063 16:42:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:41.063 16:42:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.063 16:42:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.063 16:42:04 -- common/autotest_common.sh@10 -- # set +x 00:06:41.063 ************************************ 00:06:41.063 START TEST json_config_extra_key 00:06:41.063 ************************************ 00:06:41.063 16:42:04 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:41.063 16:42:04 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.063 16:42:04 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.063 16:42:04 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.323 16:42:04 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.323 16:42:04 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.323 16:42:04 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:41.323 16:42:04 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.323 16:42:04 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.323 --rc genhtml_branch_coverage=1 00:06:41.323 --rc genhtml_function_coverage=1 00:06:41.323 --rc genhtml_legend=1 00:06:41.323 --rc geninfo_all_blocks=1 00:06:41.323 --rc geninfo_unexecuted_blocks=1 00:06:41.323 00:06:41.323 ' 00:06:41.323 16:42:04 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.323 --rc genhtml_branch_coverage=1 00:06:41.323 --rc genhtml_function_coverage=1 00:06:41.323 --rc genhtml_legend=1 00:06:41.323 --rc geninfo_all_blocks=1 00:06:41.323 --rc geninfo_unexecuted_blocks=1 00:06:41.323 00:06:41.323 ' 00:06:41.323 16:42:04 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.323 --rc genhtml_branch_coverage=1 00:06:41.323 --rc genhtml_function_coverage=1 00:06:41.323 --rc genhtml_legend=1 00:06:41.323 --rc geninfo_all_blocks=1 00:06:41.323 --rc geninfo_unexecuted_blocks=1 00:06:41.323 00:06:41.323 ' 00:06:41.323 16:42:04 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.324 --rc genhtml_branch_coverage=1 00:06:41.324 --rc genhtml_function_coverage=1 00:06:41.324 --rc genhtml_legend=1 00:06:41.324 --rc geninfo_all_blocks=1 00:06:41.324 --rc geninfo_unexecuted_blocks=1 00:06:41.324 00:06:41.324 ' 00:06:41.324 16:42:04 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:41.324 16:42:04 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:41.324 16:42:04 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.324 16:42:04 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.324 16:42:04 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.324 16:42:04 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.324 16:42:04 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.324 16:42:04 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.324 16:42:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:41.324 16:42:04 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:41.324 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:41.324 16:42:04 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:41.324 16:42:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:41.324 16:42:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:41.324 INFO: launching applications... 00:06:41.324 16:42:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:41.324 16:42:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:41.324 16:42:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:41.324 16:42:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:41.324 16:42:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:41.324 16:42:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:41.324 16:42:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:41.324 16:42:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:41.324 16:42:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
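json_config_extra_key exercises the other startup path: instead of configuring a bare target over RPC, it boots spdk_tgt directly from a prebuilt JSON file. A minimal sketch of that launch is below; the wait-for-socket probe is a stand-in for the test's waitforlisten helper, while the binary, socket and config paths are the ones in the trace.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1    # without --wait-for-rpc the RPC server comes up only after the JSON config has been applied
done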
00:06:41.324 16:42:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:41.324 16:42:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:41.324 16:42:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:41.324 16:42:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:41.324 16:42:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:41.324 16:42:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:41.324 16:42:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.324 16:42:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.324 16:42:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71770 00:06:41.324 16:42:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:41.324 Waiting for target to run... 00:06:41.324 16:42:04 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:41.324 16:42:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71770 /var/tmp/spdk_tgt.sock 00:06:41.324 16:42:04 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71770 ']' 00:06:41.324 16:42:04 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:41.324 16:42:04 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.324 16:42:04 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:41.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:41.324 16:42:04 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.324 16:42:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:41.324 [2024-11-29 16:42:04.992128] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:41.324 [2024-11-29 16:42:04.992466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71770 ] 00:06:41.582 [2024-11-29 16:42:05.289545] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:41.582 [2024-11-29 16:42:05.314659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.582 [2024-11-29 16:42:05.327706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.582 [2024-11-29 16:42:05.350221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.515 00:06:42.515 INFO: shutting down applications... 
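The shutdown that follows is the generic json_config_test_shutdown_app pattern: send SIGINT, then poll the PID for up to thirty half-second intervals, exactly the i < 30 / sleep 0.5 loop visible in the trace. A compact sketch, with tgt_pid standing in for the concrete PID (71770 here):
kill -SIGINT "$tgt_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$tgt_pid" 2>/dev/null || break   # kill -0 only checks whether the PID still exists
    sleep 0.5
done
kill -0 "$tgt_pid" 2>/dev/null || echo 'SPDK target shutdown done'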
00:06:42.515 16:42:06 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.515 16:42:06 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:42.515 16:42:06 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:42.515 16:42:06 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:42.515 16:42:06 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:42.515 16:42:06 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:42.515 16:42:06 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:42.515 16:42:06 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71770 ]] 00:06:42.515 16:42:06 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71770 00:06:42.515 16:42:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:42.515 16:42:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.515 16:42:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71770 00:06:42.515 16:42:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:42.774 16:42:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:42.774 16:42:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.774 16:42:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71770 00:06:42.774 16:42:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:42.774 16:42:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:42.774 16:42:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:42.774 16:42:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:42.774 SPDK target shutdown done 00:06:42.775 16:42:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:42.775 Success 00:06:42.775 00:06:42.775 real 0m1.794s 00:06:42.775 user 0m1.663s 00:06:42.775 sys 0m0.335s 00:06:42.775 16:42:06 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.775 ************************************ 00:06:42.775 END TEST json_config_extra_key 00:06:42.775 ************************************ 00:06:42.775 16:42:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:43.034 16:42:06 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:43.034 16:42:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.034 16:42:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.034 16:42:06 -- common/autotest_common.sh@10 -- # set +x 00:06:43.034 ************************************ 00:06:43.034 START TEST alias_rpc 00:06:43.034 ************************************ 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:43.034 * Looking for test storage... 
00:06:43.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:43.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
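The alias_rpc test starting here runs spdk_tgt without -r, so the target listens on the default RPC socket /var/tmp/spdk.sock and rpc.py needs no -s argument. The sketch below shows that setup; the liveness probe and the placeholder config file name are illustrative, and the real test feeds its configuration (which, given the test's name, presumably uses aliased RPC method names) to load_config -i via a here-document rather than a file.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
tgt_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < aliased_config.json   # aliased_config.json is a placeholder name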
00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.034 16:42:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.034 --rc genhtml_branch_coverage=1 00:06:43.034 --rc genhtml_function_coverage=1 00:06:43.034 --rc genhtml_legend=1 00:06:43.034 --rc geninfo_all_blocks=1 00:06:43.034 --rc geninfo_unexecuted_blocks=1 00:06:43.034 00:06:43.034 ' 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.034 --rc genhtml_branch_coverage=1 00:06:43.034 --rc genhtml_function_coverage=1 00:06:43.034 --rc genhtml_legend=1 00:06:43.034 --rc geninfo_all_blocks=1 00:06:43.034 --rc geninfo_unexecuted_blocks=1 00:06:43.034 00:06:43.034 ' 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.034 --rc genhtml_branch_coverage=1 00:06:43.034 --rc genhtml_function_coverage=1 00:06:43.034 --rc genhtml_legend=1 00:06:43.034 --rc geninfo_all_blocks=1 00:06:43.034 --rc geninfo_unexecuted_blocks=1 00:06:43.034 00:06:43.034 ' 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.034 --rc genhtml_branch_coverage=1 00:06:43.034 --rc genhtml_function_coverage=1 00:06:43.034 --rc genhtml_legend=1 00:06:43.034 --rc geninfo_all_blocks=1 00:06:43.034 --rc geninfo_unexecuted_blocks=1 00:06:43.034 00:06:43.034 ' 00:06:43.034 16:42:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:43.034 16:42:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71848 00:06:43.034 16:42:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71848 00:06:43.034 16:42:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71848 ']' 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.034 16:42:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.294 [2024-11-29 16:42:06.839600] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:06:43.294 [2024-11-29 16:42:06.840510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71848 ] 00:06:43.294 [2024-11-29 16:42:06.966618] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:43.294 [2024-11-29 16:42:06.989882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.294 [2024-11-29 16:42:07.009257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.294 [2024-11-29 16:42:07.044003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.553 16:42:07 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.553 16:42:07 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:43.553 16:42:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:43.812 16:42:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71848 00:06:43.812 16:42:07 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71848 ']' 00:06:43.812 16:42:07 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71848 00:06:43.812 16:42:07 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:43.812 16:42:07 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.812 16:42:07 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71848 00:06:43.812 killing process with pid 71848 00:06:43.812 16:42:07 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.812 16:42:07 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.812 16:42:07 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71848' 00:06:43.812 16:42:07 alias_rpc -- common/autotest_common.sh@973 -- # kill 71848 00:06:43.812 16:42:07 alias_rpc -- common/autotest_common.sh@978 -- # wait 71848 00:06:44.070 ************************************ 00:06:44.070 END TEST alias_rpc 00:06:44.070 ************************************ 00:06:44.070 00:06:44.070 real 0m1.145s 00:06:44.070 user 0m1.322s 00:06:44.070 sys 0m0.328s 00:06:44.070 16:42:07 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.070 16:42:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.070 16:42:07 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:44.070 16:42:07 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:44.070 16:42:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.070 16:42:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.070 16:42:07 -- common/autotest_common.sh@10 -- # set +x 00:06:44.070 ************************************ 00:06:44.070 START TEST spdkcli_tcp 00:06:44.070 ************************************ 00:06:44.070 16:42:07 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:44.070 * Looking for test storage... 
00:06:44.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.329 16:42:07 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.329 --rc genhtml_branch_coverage=1 00:06:44.329 --rc genhtml_function_coverage=1 00:06:44.329 --rc genhtml_legend=1 00:06:44.329 --rc geninfo_all_blocks=1 00:06:44.329 --rc geninfo_unexecuted_blocks=1 00:06:44.329 00:06:44.329 ' 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.329 --rc genhtml_branch_coverage=1 00:06:44.329 --rc genhtml_function_coverage=1 00:06:44.329 --rc genhtml_legend=1 00:06:44.329 --rc geninfo_all_blocks=1 00:06:44.329 --rc geninfo_unexecuted_blocks=1 00:06:44.329 
00:06:44.329 ' 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.329 --rc genhtml_branch_coverage=1 00:06:44.329 --rc genhtml_function_coverage=1 00:06:44.329 --rc genhtml_legend=1 00:06:44.329 --rc geninfo_all_blocks=1 00:06:44.329 --rc geninfo_unexecuted_blocks=1 00:06:44.329 00:06:44.329 ' 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.329 --rc genhtml_branch_coverage=1 00:06:44.329 --rc genhtml_function_coverage=1 00:06:44.329 --rc genhtml_legend=1 00:06:44.329 --rc geninfo_all_blocks=1 00:06:44.329 --rc geninfo_unexecuted_blocks=1 00:06:44.329 00:06:44.329 ' 00:06:44.329 16:42:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:44.329 16:42:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:44.329 16:42:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:44.329 16:42:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:44.329 16:42:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:44.329 16:42:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:44.329 16:42:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.329 16:42:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71919 00:06:44.329 16:42:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:44.329 16:42:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71919 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 71919 ']' 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.329 16:42:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.329 [2024-11-29 16:42:08.020991] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:44.329 [2024-11-29 16:42:08.021300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71919 ] 00:06:44.588 [2024-11-29 16:42:08.147163] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:44.588 [2024-11-29 16:42:08.172590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.588 [2024-11-29 16:42:08.192022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.588 [2024-11-29 16:42:08.192031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.588 [2024-11-29 16:42:08.227242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.588 16:42:08 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.588 16:42:08 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:44.588 16:42:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71923 00:06:44.588 16:42:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:44.588 16:42:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:44.848 [ 00:06:44.848 "bdev_malloc_delete", 00:06:44.848 "bdev_malloc_create", 00:06:44.848 "bdev_null_resize", 00:06:44.848 "bdev_null_delete", 00:06:44.848 "bdev_null_create", 00:06:44.848 "bdev_nvme_cuse_unregister", 00:06:44.848 "bdev_nvme_cuse_register", 00:06:44.848 "bdev_opal_new_user", 00:06:44.848 "bdev_opal_set_lock_state", 00:06:44.848 "bdev_opal_delete", 00:06:44.848 "bdev_opal_get_info", 00:06:44.848 "bdev_opal_create", 00:06:44.848 "bdev_nvme_opal_revert", 00:06:44.848 "bdev_nvme_opal_init", 00:06:44.848 "bdev_nvme_send_cmd", 00:06:44.848 "bdev_nvme_set_keys", 00:06:44.848 "bdev_nvme_get_path_iostat", 00:06:44.848 "bdev_nvme_get_mdns_discovery_info", 00:06:44.848 "bdev_nvme_stop_mdns_discovery", 00:06:44.848 "bdev_nvme_start_mdns_discovery", 00:06:44.848 "bdev_nvme_set_multipath_policy", 00:06:44.848 "bdev_nvme_set_preferred_path", 00:06:44.848 "bdev_nvme_get_io_paths", 00:06:44.848 "bdev_nvme_remove_error_injection", 00:06:44.848 "bdev_nvme_add_error_injection", 00:06:44.848 "bdev_nvme_get_discovery_info", 00:06:44.848 "bdev_nvme_stop_discovery", 00:06:44.848 "bdev_nvme_start_discovery", 00:06:44.848 "bdev_nvme_get_controller_health_info", 00:06:44.848 "bdev_nvme_disable_controller", 00:06:44.848 "bdev_nvme_enable_controller", 00:06:44.848 "bdev_nvme_reset_controller", 00:06:44.848 "bdev_nvme_get_transport_statistics", 00:06:44.848 "bdev_nvme_apply_firmware", 00:06:44.848 "bdev_nvme_detach_controller", 00:06:44.848 "bdev_nvme_get_controllers", 00:06:44.848 "bdev_nvme_attach_controller", 00:06:44.848 "bdev_nvme_set_hotplug", 00:06:44.848 "bdev_nvme_set_options", 00:06:44.848 "bdev_passthru_delete", 00:06:44.848 "bdev_passthru_create", 00:06:44.848 "bdev_lvol_set_parent_bdev", 00:06:44.848 "bdev_lvol_set_parent", 00:06:44.848 "bdev_lvol_check_shallow_copy", 00:06:44.848 "bdev_lvol_start_shallow_copy", 00:06:44.848 "bdev_lvol_grow_lvstore", 00:06:44.848 "bdev_lvol_get_lvols", 00:06:44.848 "bdev_lvol_get_lvstores", 00:06:44.848 "bdev_lvol_delete", 00:06:44.848 "bdev_lvol_set_read_only", 00:06:44.848 "bdev_lvol_resize", 00:06:44.848 "bdev_lvol_decouple_parent", 00:06:44.848 "bdev_lvol_inflate", 00:06:44.848 "bdev_lvol_rename", 00:06:44.848 "bdev_lvol_clone_bdev", 00:06:44.848 "bdev_lvol_clone", 00:06:44.848 "bdev_lvol_snapshot", 00:06:44.848 "bdev_lvol_create", 00:06:44.848 "bdev_lvol_delete_lvstore", 00:06:44.848 "bdev_lvol_rename_lvstore", 00:06:44.848 "bdev_lvol_create_lvstore", 00:06:44.848 "bdev_raid_set_options", 00:06:44.848 "bdev_raid_remove_base_bdev", 00:06:44.848 "bdev_raid_add_base_bdev", 00:06:44.848 "bdev_raid_delete", 
00:06:44.848 "bdev_raid_create", 00:06:44.848 "bdev_raid_get_bdevs", 00:06:44.848 "bdev_error_inject_error", 00:06:44.848 "bdev_error_delete", 00:06:44.848 "bdev_error_create", 00:06:44.848 "bdev_split_delete", 00:06:44.848 "bdev_split_create", 00:06:44.848 "bdev_delay_delete", 00:06:44.848 "bdev_delay_create", 00:06:44.848 "bdev_delay_update_latency", 00:06:44.848 "bdev_zone_block_delete", 00:06:44.848 "bdev_zone_block_create", 00:06:44.848 "blobfs_create", 00:06:44.848 "blobfs_detect", 00:06:44.848 "blobfs_set_cache_size", 00:06:44.848 "bdev_aio_delete", 00:06:44.848 "bdev_aio_rescan", 00:06:44.848 "bdev_aio_create", 00:06:44.848 "bdev_ftl_set_property", 00:06:44.848 "bdev_ftl_get_properties", 00:06:44.848 "bdev_ftl_get_stats", 00:06:44.848 "bdev_ftl_unmap", 00:06:44.848 "bdev_ftl_unload", 00:06:44.848 "bdev_ftl_delete", 00:06:44.848 "bdev_ftl_load", 00:06:44.848 "bdev_ftl_create", 00:06:44.848 "bdev_virtio_attach_controller", 00:06:44.848 "bdev_virtio_scsi_get_devices", 00:06:44.848 "bdev_virtio_detach_controller", 00:06:44.848 "bdev_virtio_blk_set_hotplug", 00:06:44.848 "bdev_iscsi_delete", 00:06:44.848 "bdev_iscsi_create", 00:06:44.848 "bdev_iscsi_set_options", 00:06:44.848 "bdev_uring_delete", 00:06:44.848 "bdev_uring_rescan", 00:06:44.848 "bdev_uring_create", 00:06:44.848 "accel_error_inject_error", 00:06:44.848 "ioat_scan_accel_module", 00:06:44.848 "dsa_scan_accel_module", 00:06:44.848 "iaa_scan_accel_module", 00:06:44.848 "keyring_file_remove_key", 00:06:44.848 "keyring_file_add_key", 00:06:44.848 "keyring_linux_set_options", 00:06:44.848 "fsdev_aio_delete", 00:06:44.848 "fsdev_aio_create", 00:06:44.848 "iscsi_get_histogram", 00:06:44.848 "iscsi_enable_histogram", 00:06:44.848 "iscsi_set_options", 00:06:44.848 "iscsi_get_auth_groups", 00:06:44.848 "iscsi_auth_group_remove_secret", 00:06:44.848 "iscsi_auth_group_add_secret", 00:06:44.848 "iscsi_delete_auth_group", 00:06:44.848 "iscsi_create_auth_group", 00:06:44.848 "iscsi_set_discovery_auth", 00:06:44.848 "iscsi_get_options", 00:06:44.848 "iscsi_target_node_request_logout", 00:06:44.848 "iscsi_target_node_set_redirect", 00:06:44.848 "iscsi_target_node_set_auth", 00:06:44.848 "iscsi_target_node_add_lun", 00:06:44.848 "iscsi_get_stats", 00:06:44.848 "iscsi_get_connections", 00:06:44.848 "iscsi_portal_group_set_auth", 00:06:44.848 "iscsi_start_portal_group", 00:06:44.848 "iscsi_delete_portal_group", 00:06:44.848 "iscsi_create_portal_group", 00:06:44.848 "iscsi_get_portal_groups", 00:06:44.848 "iscsi_delete_target_node", 00:06:44.848 "iscsi_target_node_remove_pg_ig_maps", 00:06:44.848 "iscsi_target_node_add_pg_ig_maps", 00:06:44.848 "iscsi_create_target_node", 00:06:44.848 "iscsi_get_target_nodes", 00:06:44.848 "iscsi_delete_initiator_group", 00:06:44.849 "iscsi_initiator_group_remove_initiators", 00:06:44.849 "iscsi_initiator_group_add_initiators", 00:06:44.849 "iscsi_create_initiator_group", 00:06:44.849 "iscsi_get_initiator_groups", 00:06:44.849 "nvmf_set_crdt", 00:06:44.849 "nvmf_set_config", 00:06:44.849 "nvmf_set_max_subsystems", 00:06:44.849 "nvmf_stop_mdns_prr", 00:06:44.849 "nvmf_publish_mdns_prr", 00:06:44.849 "nvmf_subsystem_get_listeners", 00:06:44.849 "nvmf_subsystem_get_qpairs", 00:06:44.849 "nvmf_subsystem_get_controllers", 00:06:44.849 "nvmf_get_stats", 00:06:44.849 "nvmf_get_transports", 00:06:44.849 "nvmf_create_transport", 00:06:44.849 "nvmf_get_targets", 00:06:44.849 "nvmf_delete_target", 00:06:44.849 "nvmf_create_target", 00:06:44.849 "nvmf_subsystem_allow_any_host", 00:06:44.849 "nvmf_subsystem_set_keys", 
00:06:44.849 "nvmf_subsystem_remove_host", 00:06:44.849 "nvmf_subsystem_add_host", 00:06:44.849 "nvmf_ns_remove_host", 00:06:44.849 "nvmf_ns_add_host", 00:06:44.849 "nvmf_subsystem_remove_ns", 00:06:44.849 "nvmf_subsystem_set_ns_ana_group", 00:06:44.849 "nvmf_subsystem_add_ns", 00:06:44.849 "nvmf_subsystem_listener_set_ana_state", 00:06:44.849 "nvmf_discovery_get_referrals", 00:06:44.849 "nvmf_discovery_remove_referral", 00:06:44.849 "nvmf_discovery_add_referral", 00:06:44.849 "nvmf_subsystem_remove_listener", 00:06:44.849 "nvmf_subsystem_add_listener", 00:06:44.849 "nvmf_delete_subsystem", 00:06:44.849 "nvmf_create_subsystem", 00:06:44.849 "nvmf_get_subsystems", 00:06:44.849 "env_dpdk_get_mem_stats", 00:06:44.849 "nbd_get_disks", 00:06:44.849 "nbd_stop_disk", 00:06:44.849 "nbd_start_disk", 00:06:44.849 "ublk_recover_disk", 00:06:44.849 "ublk_get_disks", 00:06:44.849 "ublk_stop_disk", 00:06:44.849 "ublk_start_disk", 00:06:44.849 "ublk_destroy_target", 00:06:44.849 "ublk_create_target", 00:06:44.849 "virtio_blk_create_transport", 00:06:44.849 "virtio_blk_get_transports", 00:06:44.849 "vhost_controller_set_coalescing", 00:06:44.849 "vhost_get_controllers", 00:06:44.849 "vhost_delete_controller", 00:06:44.849 "vhost_create_blk_controller", 00:06:44.849 "vhost_scsi_controller_remove_target", 00:06:44.849 "vhost_scsi_controller_add_target", 00:06:44.849 "vhost_start_scsi_controller", 00:06:44.849 "vhost_create_scsi_controller", 00:06:44.849 "thread_set_cpumask", 00:06:44.849 "scheduler_set_options", 00:06:44.849 "framework_get_governor", 00:06:44.849 "framework_get_scheduler", 00:06:44.849 "framework_set_scheduler", 00:06:44.849 "framework_get_reactors", 00:06:44.849 "thread_get_io_channels", 00:06:44.849 "thread_get_pollers", 00:06:44.849 "thread_get_stats", 00:06:44.849 "framework_monitor_context_switch", 00:06:44.849 "spdk_kill_instance", 00:06:44.849 "log_enable_timestamps", 00:06:44.849 "log_get_flags", 00:06:44.849 "log_clear_flag", 00:06:44.849 "log_set_flag", 00:06:44.849 "log_get_level", 00:06:44.849 "log_set_level", 00:06:44.849 "log_get_print_level", 00:06:44.849 "log_set_print_level", 00:06:44.849 "framework_enable_cpumask_locks", 00:06:44.849 "framework_disable_cpumask_locks", 00:06:44.849 "framework_wait_init", 00:06:44.849 "framework_start_init", 00:06:44.849 "scsi_get_devices", 00:06:44.849 "bdev_get_histogram", 00:06:44.849 "bdev_enable_histogram", 00:06:44.849 "bdev_set_qos_limit", 00:06:44.849 "bdev_set_qd_sampling_period", 00:06:44.849 "bdev_get_bdevs", 00:06:44.849 "bdev_reset_iostat", 00:06:44.849 "bdev_get_iostat", 00:06:44.849 "bdev_examine", 00:06:44.849 "bdev_wait_for_examine", 00:06:44.849 "bdev_set_options", 00:06:44.849 "accel_get_stats", 00:06:44.849 "accel_set_options", 00:06:44.849 "accel_set_driver", 00:06:44.849 "accel_crypto_key_destroy", 00:06:44.849 "accel_crypto_keys_get", 00:06:44.849 "accel_crypto_key_create", 00:06:44.849 "accel_assign_opc", 00:06:44.849 "accel_get_module_info", 00:06:44.849 "accel_get_opc_assignments", 00:06:44.849 "vmd_rescan", 00:06:44.849 "vmd_remove_device", 00:06:44.849 "vmd_enable", 00:06:44.849 "sock_get_default_impl", 00:06:44.849 "sock_set_default_impl", 00:06:44.849 "sock_impl_set_options", 00:06:44.849 "sock_impl_get_options", 00:06:44.849 "iobuf_get_stats", 00:06:44.849 "iobuf_set_options", 00:06:44.849 "keyring_get_keys", 00:06:44.849 "framework_get_pci_devices", 00:06:44.849 "framework_get_config", 00:06:44.849 "framework_get_subsystems", 00:06:44.849 "fsdev_set_opts", 00:06:44.849 "fsdev_get_opts", 00:06:44.849 
"trace_get_info", 00:06:44.849 "trace_get_tpoint_group_mask", 00:06:44.849 "trace_disable_tpoint_group", 00:06:44.849 "trace_enable_tpoint_group", 00:06:44.849 "trace_clear_tpoint_mask", 00:06:44.849 "trace_set_tpoint_mask", 00:06:44.849 "notify_get_notifications", 00:06:44.849 "notify_get_types", 00:06:44.849 "spdk_get_version", 00:06:44.849 "rpc_get_methods" 00:06:44.849 ] 00:06:44.849 16:42:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:44.849 16:42:08 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.849 16:42:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.849 16:42:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:44.849 16:42:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71919 00:06:44.849 16:42:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71919 ']' 00:06:44.849 16:42:08 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71919 00:06:44.849 16:42:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:44.849 16:42:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.849 16:42:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71919 00:06:45.108 killing process with pid 71919 00:06:45.108 16:42:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.108 16:42:08 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.108 16:42:08 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71919' 00:06:45.108 16:42:08 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71919 00:06:45.108 16:42:08 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71919 00:06:45.108 00:06:45.108 real 0m1.104s 00:06:45.108 user 0m1.895s 00:06:45.108 sys 0m0.352s 00:06:45.108 ************************************ 00:06:45.108 END TEST spdkcli_tcp 00:06:45.108 ************************************ 00:06:45.108 16:42:08 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.108 16:42:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.367 16:42:08 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:45.367 16:42:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.367 16:42:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.367 16:42:08 -- common/autotest_common.sh@10 -- # set +x 00:06:45.367 ************************************ 00:06:45.367 START TEST dpdk_mem_utility 00:06:45.367 ************************************ 00:06:45.367 16:42:08 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:45.367 * Looking for test storage... 
00:06:45.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:45.367 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.368 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.368 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.368 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.368 16:42:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:45.368 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.368 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.368 --rc genhtml_branch_coverage=1 00:06:45.368 --rc genhtml_function_coverage=1 00:06:45.368 --rc genhtml_legend=1 00:06:45.368 --rc geninfo_all_blocks=1 00:06:45.368 --rc geninfo_unexecuted_blocks=1 00:06:45.368 00:06:45.368 ' 00:06:45.368 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.368 --rc 
genhtml_branch_coverage=1 00:06:45.368 --rc genhtml_function_coverage=1 00:06:45.368 --rc genhtml_legend=1 00:06:45.368 --rc geninfo_all_blocks=1 00:06:45.368 --rc geninfo_unexecuted_blocks=1 00:06:45.368 00:06:45.368 ' 00:06:45.368 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.368 --rc genhtml_branch_coverage=1 00:06:45.368 --rc genhtml_function_coverage=1 00:06:45.368 --rc genhtml_legend=1 00:06:45.368 --rc geninfo_all_blocks=1 00:06:45.368 --rc geninfo_unexecuted_blocks=1 00:06:45.368 00:06:45.368 ' 00:06:45.368 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.368 --rc genhtml_branch_coverage=1 00:06:45.368 --rc genhtml_function_coverage=1 00:06:45.368 --rc genhtml_legend=1 00:06:45.368 --rc geninfo_all_blocks=1 00:06:45.368 --rc geninfo_unexecuted_blocks=1 00:06:45.368 00:06:45.368 ' 00:06:45.368 16:42:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:45.368 16:42:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=72005 00:06:45.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.368 16:42:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 72005 00:06:45.368 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 72005 ']' 00:06:45.368 16:42:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:45.368 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.368 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.368 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.368 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.368 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:45.627 [2024-11-29 16:42:09.204452] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:45.627 [2024-11-29 16:42:09.204781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72005 ] 00:06:45.627 [2024-11-29 16:42:09.330952] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
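Once this target is up, the test below asks it to dump its DPDK memory state and then summarizes the dump file; the heap, mempool and memzone listings that follow are dpdk_mem_info.py's rendering of that dump. Done by hand, the sequence is roughly the following sketch, using the paths shown in the trace:

    # 1. Ask the running target to write its DPDK memory statistics; the RPC
    #    replies with the dump location (/tmp/spdk_mem_dump.txt in this run).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

    # 2. Summarize the dump: heap totals, mempools and memzones...
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    # 3. ...or drill into a single heap's free/busy element lists, as the
    #    "-m 0" invocation traced below does for heap id 0.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0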
00:06:45.627 [2024-11-29 16:42:09.357160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.627 [2024-11-29 16:42:09.376196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.627 [2024-11-29 16:42:09.409986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.886 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.886 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:45.886 16:42:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:45.886 16:42:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:45.886 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.886 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:45.886 { 00:06:45.886 "filename": "/tmp/spdk_mem_dump.txt" 00:06:45.886 } 00:06:45.886 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.886 16:42:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:45.886 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:45.886 1 heaps totaling size 818.000000 MiB 00:06:45.886 size: 818.000000 MiB heap id: 0 00:06:45.886 end heaps---------- 00:06:45.886 9 mempools totaling size 603.782043 MiB 00:06:45.886 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:45.886 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:45.886 size: 100.555481 MiB name: bdev_io_72005 00:06:45.886 size: 50.003479 MiB name: msgpool_72005 00:06:45.886 size: 36.509338 MiB name: fsdev_io_72005 00:06:45.886 size: 21.763794 MiB name: PDU_Pool 00:06:45.886 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:45.886 size: 4.133484 MiB name: evtpool_72005 00:06:45.886 size: 0.026123 MiB name: Session_Pool 00:06:45.886 end mempools------- 00:06:45.886 6 memzones totaling size 4.142822 MiB 00:06:45.886 size: 1.000366 MiB name: RG_ring_0_72005 00:06:45.886 size: 1.000366 MiB name: RG_ring_1_72005 00:06:45.886 size: 1.000366 MiB name: RG_ring_4_72005 00:06:45.886 size: 1.000366 MiB name: RG_ring_5_72005 00:06:45.886 size: 0.125366 MiB name: RG_ring_2_72005 00:06:45.886 size: 0.015991 MiB name: RG_ring_3_72005 00:06:45.886 end memzones------- 00:06:45.886 16:42:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:45.886 heap id: 0 total size: 818.000000 MiB number of busy elements: 328 number of free elements: 15 00:06:45.886 list of free elements. 
size: 10.941040 MiB 00:06:45.886 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:45.886 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:45.886 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:45.886 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:45.886 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:45.886 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:45.886 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:45.886 element at address: 0x200000200000 with size: 0.858093 MiB 00:06:45.886 element at address: 0x20001ae00000 with size: 0.565491 MiB 00:06:45.886 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:45.886 element at address: 0x200000c00000 with size: 0.486267 MiB 00:06:45.886 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:45.886 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:45.886 element at address: 0x200028200000 with size: 0.395752 MiB 00:06:45.886 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:45.886 list of standard malloc elements. size: 199.130066 MiB 00:06:45.886 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:45.886 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:45.886 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:45.886 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:45.886 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:45.886 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:45.886 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:45.886 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:45.886 element at address: 0x2000002fbcc0 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000003fdec0 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:45.886 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 
00:06:45.887 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:45.887 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:45.887 element at 
address: 0x200000c7d480 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:45.887 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:46.147 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20000a67d280 
with size: 0.000183 MiB 00:06:46.147 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:46.147 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:46.147 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:46.147 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae90c40 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae90d00 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae90dc0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae90e80 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae90f40 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae91000 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae910c0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae91180 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae91240 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae91300 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae913c0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae91480 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:06:46.147 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92440 with size: 0.000183 MiB 
00:06:46.148 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:06:46.148 element at 
address: 0x20001ae949c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:46.148 element at address: 0x200028265500 with size: 0.000183 MiB 00:06:46.148 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826c480 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826c540 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826c600 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826c780 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826c840 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826c900 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826d080 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826d140 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826d200 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826d380 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826d440 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826d500 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826d680 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826d740 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826d800 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826d980 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826da40 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826db00 
with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826de00 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826df80 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826e040 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826e100 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826e280 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826e340 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826e400 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826e580 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826e640 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826e700 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826e880 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826e940 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826f000 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826f180 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826f240 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826f300 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826f480 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826f540 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826f600 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826f780 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826f840 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826f900 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:06:46.148 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:06:46.149 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:06:46.149 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:06:46.149 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:46.149 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:46.149 list of memzone associated elements. 
size: 607.928894 MiB 00:06:46.149 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:46.149 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:46.149 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:46.149 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:46.149 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:46.149 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_72005_0 00:06:46.149 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:46.149 associated memzone info: size: 48.002930 MiB name: MP_msgpool_72005_0 00:06:46.149 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:46.149 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_72005_0 00:06:46.149 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:46.149 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:46.149 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:46.149 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:46.149 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:46.149 associated memzone info: size: 3.000122 MiB name: MP_evtpool_72005_0 00:06:46.149 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:46.149 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_72005 00:06:46.149 element at address: 0x2000002fbd80 with size: 1.008118 MiB 00:06:46.149 associated memzone info: size: 1.007996 MiB name: MP_evtpool_72005 00:06:46.149 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:46.149 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:46.149 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:46.149 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:46.149 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:46.149 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:46.149 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:46.149 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:46.149 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:46.149 associated memzone info: size: 1.000366 MiB name: RG_ring_0_72005 00:06:46.149 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:46.149 associated memzone info: size: 1.000366 MiB name: RG_ring_1_72005 00:06:46.149 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:46.149 associated memzone info: size: 1.000366 MiB name: RG_ring_4_72005 00:06:46.149 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:46.149 associated memzone info: size: 1.000366 MiB name: RG_ring_5_72005 00:06:46.149 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:46.149 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_72005 00:06:46.149 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:46.149 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_72005 00:06:46.149 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:46.149 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:46.149 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:46.149 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:46.149 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:46.149 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:46.149 element at address: 0x2000002dbac0 with size: 0.125488 MiB 00:06:46.149 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_72005 00:06:46.149 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:46.149 associated memzone info: size: 0.125366 MiB name: RG_ring_2_72005 00:06:46.149 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:46.149 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:46.149 element at address: 0x200028265680 with size: 0.023743 MiB 00:06:46.149 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:46.149 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:46.149 associated memzone info: size: 0.015991 MiB name: RG_ring_3_72005 00:06:46.149 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:06:46.149 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:46.149 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:46.149 associated memzone info: size: 0.000183 MiB name: MP_msgpool_72005 00:06:46.149 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:46.149 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_72005 00:06:46.149 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:46.149 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_72005 00:06:46.149 element at address: 0x20002826c280 with size: 0.000305 MiB 00:06:46.149 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:46.149 16:42:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:46.149 16:42:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 72005 00:06:46.149 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 72005 ']' 00:06:46.149 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 72005 00:06:46.149 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:46.149 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.149 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72005 00:06:46.149 killing process with pid 72005 00:06:46.149 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.149 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.149 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72005' 00:06:46.149 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 72005 00:06:46.149 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 72005 00:06:46.408 ************************************ 00:06:46.408 END TEST dpdk_mem_utility 00:06:46.408 ************************************ 00:06:46.408 00:06:46.408 real 0m1.009s 00:06:46.408 user 0m1.090s 00:06:46.408 sys 0m0.312s 00:06:46.408 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.408 16:42:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:46.408 16:42:09 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:46.408 16:42:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.408 16:42:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.408 16:42:09 -- common/autotest_common.sh@10 -- # set +x 
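The dump above is the tail of the DPDK memory report taken from the target application (pid 72005): the individual malloc elements followed by the named memzones (msgpool, bdev_io, fsdev_io, PDU and SCSI task pools) still allocated at teardown time. The killprocess call that follows it is SPDK's standard cleanup helper from common/autotest_common.sh; a minimal sketch of the pattern the xtrace walks through, reconstructed from the traced commands rather than copied verbatim from the repository:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                         # '[' -z 72005 ']'
        kill -0 "$pid" 2>/dev/null || return 0            # nothing left to kill
        local process_name=""
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        fi
        echo "killing process with pid $pid"
        if [ "$process_name" = sudo ]; then
            sudo kill "$pid"                              # branch not taken here
        else
            kill "$pid"
        fi
        wait "$pid" 2>/dev/null || true                   # reap the target app
    }

Since the target reports itself as reactor_0 rather than sudo, the plain kill path is taken, wait reaps pid 72005, and run_test hands off to the event suite that starts below.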
00:06:46.408 ************************************ 00:06:46.408 START TEST event 00:06:46.408 ************************************ 00:06:46.408 16:42:10 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:46.408 * Looking for test storage... 00:06:46.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:46.408 16:42:10 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.408 16:42:10 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.408 16:42:10 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.408 16:42:10 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.408 16:42:10 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.408 16:42:10 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.408 16:42:10 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.408 16:42:10 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.408 16:42:10 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.408 16:42:10 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.408 16:42:10 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.408 16:42:10 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.408 16:42:10 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.408 16:42:10 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.408 16:42:10 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.408 16:42:10 event -- scripts/common.sh@344 -- # case "$op" in 00:06:46.408 16:42:10 event -- scripts/common.sh@345 -- # : 1 00:06:46.408 16:42:10 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.408 16:42:10 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.408 16:42:10 event -- scripts/common.sh@365 -- # decimal 1 00:06:46.408 16:42:10 event -- scripts/common.sh@353 -- # local d=1 00:06:46.408 16:42:10 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.408 16:42:10 event -- scripts/common.sh@355 -- # echo 1 00:06:46.408 16:42:10 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.409 16:42:10 event -- scripts/common.sh@366 -- # decimal 2 00:06:46.409 16:42:10 event -- scripts/common.sh@353 -- # local d=2 00:06:46.409 16:42:10 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.409 16:42:10 event -- scripts/common.sh@355 -- # echo 2 00:06:46.409 16:42:10 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.409 16:42:10 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.409 16:42:10 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.409 16:42:10 event -- scripts/common.sh@368 -- # return 0 00:06:46.409 16:42:10 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.409 16:42:10 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.409 --rc genhtml_branch_coverage=1 00:06:46.409 --rc genhtml_function_coverage=1 00:06:46.409 --rc genhtml_legend=1 00:06:46.409 --rc geninfo_all_blocks=1 00:06:46.409 --rc geninfo_unexecuted_blocks=1 00:06:46.409 00:06:46.409 ' 00:06:46.409 16:42:10 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.409 --rc genhtml_branch_coverage=1 00:06:46.409 --rc genhtml_function_coverage=1 00:06:46.409 --rc genhtml_legend=1 00:06:46.409 --rc 
geninfo_all_blocks=1 00:06:46.409 --rc geninfo_unexecuted_blocks=1 00:06:46.409 00:06:46.409 ' 00:06:46.409 16:42:10 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.409 --rc genhtml_branch_coverage=1 00:06:46.409 --rc genhtml_function_coverage=1 00:06:46.409 --rc genhtml_legend=1 00:06:46.409 --rc geninfo_all_blocks=1 00:06:46.409 --rc geninfo_unexecuted_blocks=1 00:06:46.409 00:06:46.409 ' 00:06:46.409 16:42:10 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.409 --rc genhtml_branch_coverage=1 00:06:46.409 --rc genhtml_function_coverage=1 00:06:46.409 --rc genhtml_legend=1 00:06:46.409 --rc geninfo_all_blocks=1 00:06:46.409 --rc geninfo_unexecuted_blocks=1 00:06:46.409 00:06:46.409 ' 00:06:46.409 16:42:10 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:46.409 16:42:10 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:46.409 16:42:10 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:46.409 16:42:10 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:46.409 16:42:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.409 16:42:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.691 ************************************ 00:06:46.691 START TEST event_perf 00:06:46.691 ************************************ 00:06:46.691 16:42:10 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:46.691 Running I/O for 1 seconds...[2024-11-29 16:42:10.222306] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:46.691 [2024-11-29 16:42:10.222549] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72077 ] 00:06:46.691 [2024-11-29 16:42:10.344842] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:46.691 [2024-11-29 16:42:10.377128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.691 [2024-11-29 16:42:10.405011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.691 [2024-11-29 16:42:10.405265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.691 Running I/O for 1 seconds...[2024-11-29 16:42:10.405259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.691 [2024-11-29 16:42:10.405121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.064 00:06:48.064 lcore 0: 199029 00:06:48.064 lcore 1: 199027 00:06:48.064 lcore 2: 199028 00:06:48.064 lcore 3: 199028 00:06:48.064 done. 
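The scripts/common.sh trace at the top of TEST event is the coverage-version gate: it reads lcov --version, compares it against 2 with the cmp_versions helper, and selects the legacy --rc lcov_* option spelling when the installed lcov is older. A reduced sketch of that comparison, written as a simplified stand-in that assumes purely numeric version fields (it is not the verbatim scripts/common.sh code):

    version_lt() {                            # version_lt 1.15 2  ->  success (1.15 < 2)
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1     # first differing field decides
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1                              # equal versions are not "less than"
    }
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    fi

The lcore counters printed just above are the per-reactor totals of events processed during the one-second event_perf run; that step can be repeated by hand with the same binary and flags (0xF selects cores 0-3, -t 1 runs for one second), and the event_reactor and event_reactor_perf steps that follow use the same run_test pattern with their own binaries:

    # prints one "lcore N: <events>" line per reactor, then "done."
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1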
00:06:48.064 ************************************ 00:06:48.064 END TEST event_perf 00:06:48.064 ************************************ 00:06:48.064 00:06:48.064 real 0m1.242s 00:06:48.064 user 0m4.076s 00:06:48.064 sys 0m0.045s 00:06:48.064 16:42:11 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.064 16:42:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.064 16:42:11 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:48.064 16:42:11 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:48.064 16:42:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.064 16:42:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.064 ************************************ 00:06:48.064 START TEST event_reactor 00:06:48.064 ************************************ 00:06:48.064 16:42:11 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:48.064 [2024-11-29 16:42:11.506623] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:06:48.064 [2024-11-29 16:42:11.506724] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72110 ] 00:06:48.064 [2024-11-29 16:42:11.622060] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.064 [2024-11-29 16:42:11.647956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.064 [2024-11-29 16:42:11.666647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.058 test_start 00:06:49.058 oneshot 00:06:49.058 tick 100 00:06:49.058 tick 100 00:06:49.058 tick 250 00:06:49.058 tick 100 00:06:49.058 tick 100 00:06:49.058 tick 100 00:06:49.058 tick 250 00:06:49.058 tick 500 00:06:49.058 tick 100 00:06:49.058 tick 100 00:06:49.058 tick 250 00:06:49.058 tick 100 00:06:49.058 tick 100 00:06:49.058 test_end 00:06:49.058 00:06:49.058 real 0m1.207s 00:06:49.058 user 0m1.071s 00:06:49.058 sys 0m0.032s 00:06:49.058 ************************************ 00:06:49.058 END TEST event_reactor 00:06:49.058 ************************************ 00:06:49.058 16:42:12 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.058 16:42:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:49.058 16:42:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:49.058 16:42:12 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:49.058 16:42:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.058 16:42:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.058 ************************************ 00:06:49.058 START TEST event_reactor_perf 00:06:49.058 ************************************ 00:06:49.058 16:42:12 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:49.058 [2024-11-29 16:42:12.770408] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:06:49.058 [2024-11-29 16:42:12.771238] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72145 ] 00:06:49.316 [2024-11-29 16:42:12.892639] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:49.316 [2024-11-29 16:42:12.915130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.316 [2024-11-29 16:42:12.934638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.252 test_start 00:06:50.252 test_end 00:06:50.252 Performance: 431774 events per second 00:06:50.252 00:06:50.252 real 0m1.215s 00:06:50.252 user 0m1.073s 00:06:50.252 sys 0m0.035s 00:06:50.252 16:42:13 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.252 16:42:13 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:50.252 ************************************ 00:06:50.252 END TEST event_reactor_perf 00:06:50.252 ************************************ 00:06:50.252 16:42:14 event -- event/event.sh@49 -- # uname -s 00:06:50.252 16:42:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:50.252 16:42:14 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:50.252 16:42:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.252 16:42:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.252 16:42:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.252 ************************************ 00:06:50.252 START TEST event_scheduler 00:06:50.252 ************************************ 00:06:50.252 16:42:14 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:50.511 * Looking for test storage... 
00:06:50.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:50.511 16:42:14 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.511 16:42:14 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.511 16:42:14 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.511 16:42:14 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.511 16:42:14 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:50.511 16:42:14 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.511 16:42:14 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.511 --rc genhtml_branch_coverage=1 00:06:50.511 --rc genhtml_function_coverage=1 00:06:50.511 --rc genhtml_legend=1 00:06:50.511 --rc geninfo_all_blocks=1 00:06:50.511 --rc geninfo_unexecuted_blocks=1 00:06:50.511 00:06:50.511 ' 00:06:50.511 16:42:14 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.511 --rc genhtml_branch_coverage=1 00:06:50.511 --rc genhtml_function_coverage=1 00:06:50.511 --rc genhtml_legend=1 00:06:50.511 --rc geninfo_all_blocks=1 00:06:50.511 --rc geninfo_unexecuted_blocks=1 00:06:50.511 00:06:50.511 ' 00:06:50.512 16:42:14 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.512 --rc genhtml_branch_coverage=1 00:06:50.512 --rc genhtml_function_coverage=1 00:06:50.512 --rc genhtml_legend=1 00:06:50.512 --rc geninfo_all_blocks=1 00:06:50.512 --rc geninfo_unexecuted_blocks=1 00:06:50.512 00:06:50.512 ' 00:06:50.512 16:42:14 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.512 --rc genhtml_branch_coverage=1 00:06:50.512 --rc genhtml_function_coverage=1 00:06:50.512 --rc genhtml_legend=1 00:06:50.512 --rc geninfo_all_blocks=1 00:06:50.512 --rc geninfo_unexecuted_blocks=1 00:06:50.512 00:06:50.512 ' 00:06:50.512 16:42:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:50.512 16:42:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72215 00:06:50.512 16:42:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.512 16:42:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:50.512 16:42:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72215 00:06:50.512 16:42:14 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 72215 ']' 00:06:50.512 16:42:14 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.512 16:42:14 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.512 16:42:14 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.512 16:42:14 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.512 16:42:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.512 [2024-11-29 16:42:14.264943] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:06:50.512 [2024-11-29 16:42:14.265937] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72215 ] 00:06:50.771 [2024-11-29 16:42:14.393110] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:50.771 [2024-11-29 16:42:14.425237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.771 [2024-11-29 16:42:14.453303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.771 [2024-11-29 16:42:14.453451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.771 [2024-11-29 16:42:14.453575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.771 [2024-11-29 16:42:14.453583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.771 16:42:14 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.771 16:42:14 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:50.771 16:42:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:50.771 16:42:14 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.771 16:42:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.771 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:50.771 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:50.771 POWER: intel_pstate driver is not supported 00:06:50.771 POWER: cppc_cpufreq driver is not supported 00:06:50.771 POWER: amd-pstate driver is not supported 00:06:50.771 POWER: acpi-cpufreq driver is not supported 00:06:50.771 POWER: Unable to set Power Management Environment for lcore 0 00:06:50.771 [2024-11-29 16:42:14.528822] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:50.771 [2024-11-29 16:42:14.528970] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:50.771 [2024-11-29 16:42:14.529102] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:50.771 [2024-11-29 16:42:14.529238] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:50.771 [2024-11-29 16:42:14.529380] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:50.771 [2024-11-29 16:42:14.529494] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:50.771 16:42:14 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.771 16:42:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:50.771 16:42:14 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.771 16:42:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.030 [2024-11-29 16:42:14.569710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.030 [2024-11-29 16:42:14.588726] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
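None of the cpufreq drivers probed above (intel_pstate, cppc_cpufreq, amd-pstate, acpi-cpufreq) is usable inside this VM, so the dpdk governor fails to initialize and the dynamic scheduler runs without it, with only its default thresholds applied (load limit 20, core limit 80, core busy 95); the test continues regardless. The setup traced above amounts to starting the scheduler test app with RPC-gated initialization, switching schedulers, and then completing init. A minimal manual equivalent, assuming rpc_cmd in the trace wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket:

    # start the scheduler test app on cores 0-3 with main core 2 (-p 0x2),
    # holding subsystem init until an RPC says otherwise (--wait-for-rpc)
    /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!

    # the harness waits for /var/tmp/spdk.sock to accept RPCs before this point
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init

The "Default socket implementation override: uring" and "Scheduler test application started" notices above mark the end of that init sequence.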
00:06:51.030 16:42:14 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.030 16:42:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:51.030 16:42:14 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.030 16:42:14 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.030 16:42:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.030 ************************************ 00:06:51.030 START TEST scheduler_create_thread 00:06:51.030 ************************************ 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.031 2 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.031 3 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.031 4 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.031 5 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.031 6 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.031 7 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.031 8 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.031 9 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.031 10 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.031 16:42:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.406 16:42:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.406 16:42:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:52.406 16:42:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:52.406 16:42:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.406 16:42:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.782 16:42:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.782 00:06:53.782 real 0m2.614s 00:06:53.782 user 0m0.014s 00:06:53.782 sys 0m0.004s 00:06:53.782 16:42:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.782 16:42:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.782 ************************************ 00:06:53.782 END TEST scheduler_create_thread 00:06:53.782 ************************************ 00:06:53.782 16:42:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:53.782 16:42:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72215 00:06:53.782 16:42:17 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 72215 ']' 00:06:53.782 16:42:17 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 72215 00:06:53.782 16:42:17 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:53.782 16:42:17 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.782 16:42:17 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72215 00:06:53.782 killing process with pid 72215 00:06:53.782 16:42:17 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:53.782 16:42:17 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:53.782 16:42:17 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72215' 00:06:53.782 16:42:17 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 72215 00:06:53.782 16:42:17 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 72215 00:06:54.042 [2024-11-29 16:42:17.696112] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
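scheduler_create_thread exercises the dynamic scheduler through an rpc.py plugin: it creates busy and idle threads pinned to each of the four cores, adds partially active unpinned threads, raises one thread's activity at runtime, and deletes another before the app shuts down. A condensed sketch of that RPC sequence, assuming the scheduler_plugin module used by the test is importable on rpc.py's plugin path (thread ids 11 and 12 are simply the ones this run returned):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100      # 100% busy, pinned to core 0
    $rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0        # idle, pinned to core 0
    $rpc scheduler_thread_create -n one_third_active -a 30           # unpinned, ~30% active
    thread_id=$($rpc scheduler_thread_create -n half_active -a 0)    # id 11 in this run
    $rpc scheduler_thread_set_active "$thread_id" 50                 # bump it to 50% active
    tmp_id=$($rpc scheduler_thread_create -n deleted -a 100)         # id 12 in this run
    $rpc scheduler_thread_delete "$tmp_id"                           # and remove it again

The "Scheduler test application stopped" notice above is the app shutting down once this thread set has been torn down; the timing summary and END banner follow below.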
00:06:54.042 00:06:54.042 real 0m3.798s 00:06:54.042 user 0m5.670s 00:06:54.042 sys 0m0.297s 00:06:54.042 16:42:17 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.042 16:42:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.042 ************************************ 00:06:54.042 END TEST event_scheduler 00:06:54.042 ************************************ 00:06:54.317 16:42:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:54.317 16:42:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:54.317 16:42:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.317 16:42:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.317 16:42:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.317 ************************************ 00:06:54.317 START TEST app_repeat 00:06:54.317 ************************************ 00:06:54.317 16:42:17 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:54.317 16:42:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.317 16:42:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.317 16:42:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:54.317 16:42:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.317 16:42:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:54.317 16:42:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:54.317 16:42:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:54.317 16:42:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72301 00:06:54.317 16:42:17 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:54.317 16:42:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.317 Process app_repeat pid: 72301 00:06:54.317 16:42:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72301' 00:06:54.317 spdk_app_start Round 0 00:06:54.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:54.317 16:42:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:54.317 16:42:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:54.317 16:42:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72301 /var/tmp/spdk-nbd.sock 00:06:54.317 16:42:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72301 ']' 00:06:54.317 16:42:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.317 16:42:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.317 16:42:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.317 16:42:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.317 16:42:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.317 [2024-11-29 16:42:17.908667] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:06:54.317 [2024-11-29 16:42:17.908922] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72301 ] 00:06:54.317 [2024-11-29 16:42:18.033133] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:54.317 [2024-11-29 16:42:18.051336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.317 [2024-11-29 16:42:18.072140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.317 [2024-11-29 16:42:18.072151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.596 [2024-11-29 16:42:18.104499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.596 16:42:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.596 16:42:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:54.596 16:42:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.870 Malloc0 00:06:54.870 16:42:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.129 Malloc1 00:06:55.129 16:42:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.129 16:42:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.129 16:42:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.129 16:42:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:55.129 16:42:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.129 16:42:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:55.130 16:42:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.130 16:42:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.130 16:42:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.130 16:42:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:55.130 16:42:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.130 16:42:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:55.130 16:42:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:55.130 16:42:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:55.130 16:42:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.130 16:42:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:55.389 /dev/nbd0 00:06:55.389 16:42:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:55.389 16:42:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:55.389 16:42:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:55.389 16:42:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:55.389 
16:42:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.389 16:42:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.389 16:42:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:55.389 16:42:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:55.389 16:42:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.389 16:42:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.389 16:42:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:55.389 1+0 records in 00:06:55.389 1+0 records out 00:06:55.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682045 s, 6.0 MB/s 00:06:55.389 16:42:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:55.389 16:42:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:55.389 16:42:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:55.389 16:42:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.389 16:42:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:55.389 16:42:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.389 16:42:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.389 16:42:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:55.647 /dev/nbd1 00:06:55.647 16:42:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:55.647 16:42:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:55.647 16:42:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:55.647 16:42:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:55.647 16:42:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.647 16:42:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.647 16:42:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:55.647 16:42:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:55.647 16:42:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.647 16:42:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.647 16:42:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:55.647 1+0 records in 00:06:55.647 1+0 records out 00:06:55.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219156 s, 18.7 MB/s 00:06:55.647 16:42:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:55.647 16:42:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:55.647 16:42:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:55.647 16:42:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.647 16:42:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:55.647 16:42:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:06:55.647 16:42:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.647 16:42:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.647 16:42:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.647 16:42:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.905 16:42:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:55.905 { 00:06:55.905 "nbd_device": "/dev/nbd0", 00:06:55.905 "bdev_name": "Malloc0" 00:06:55.905 }, 00:06:55.905 { 00:06:55.905 "nbd_device": "/dev/nbd1", 00:06:55.905 "bdev_name": "Malloc1" 00:06:55.905 } 00:06:55.905 ]' 00:06:55.905 16:42:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:55.905 { 00:06:55.905 "nbd_device": "/dev/nbd0", 00:06:55.905 "bdev_name": "Malloc0" 00:06:55.905 }, 00:06:55.905 { 00:06:55.905 "nbd_device": "/dev/nbd1", 00:06:55.905 "bdev_name": "Malloc1" 00:06:55.905 } 00:06:55.905 ]' 00:06:55.905 16:42:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.905 16:42:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:55.905 /dev/nbd1' 00:06:55.905 16:42:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:55.905 /dev/nbd1' 00:06:55.905 16:42:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.905 16:42:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:55.905 16:42:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:55.905 16:42:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:55.905 16:42:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:55.905 16:42:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:55.905 16:42:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.905 16:42:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.905 16:42:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:55.906 256+0 records in 00:06:55.906 256+0 records out 00:06:55.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107981 s, 97.1 MB/s 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:55.906 256+0 records in 00:06:55.906 256+0 records out 00:06:55.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256437 s, 40.9 MB/s 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:55.906 256+0 records in 00:06:55.906 256+0 records out 00:06:55.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250551 s, 41.9 MB/s 00:06:55.906 16:42:19 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.906 16:42:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:56.164 16:42:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.164 16:42:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:56.164 16:42:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.164 16:42:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.164 16:42:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.164 16:42:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:56.164 16:42:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.164 16:42:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.422 16:42:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.422 16:42:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.422 16:42:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.422 16:42:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.422 16:42:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.422 16:42:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.422 16:42:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:56.422 16:42:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.422 16:42:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.422 16:42:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:56.681 16:42:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:56.681 16:42:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:56.681 16:42:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:56.681 16:42:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.681 16:42:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.681 16:42:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:56.681 
16:42:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:56.681 16:42:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.681 16:42:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.681 16:42:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.681 16:42:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.939 16:42:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:56.939 16:42:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.939 16:42:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:56.939 16:42:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:56.939 16:42:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.939 16:42:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:56.939 16:42:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:56.939 16:42:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:56.939 16:42:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:56.939 16:42:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:56.939 16:42:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:56.939 16:42:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:56.939 16:42:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:57.198 16:42:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:57.457 [2024-11-29 16:42:21.073427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.457 [2024-11-29 16:42:21.096987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.457 [2024-11-29 16:42:21.097001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.457 [2024-11-29 16:42:21.127502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.457 [2024-11-29 16:42:21.127594] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:57.457 [2024-11-29 16:42:21.127606] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:00.745 spdk_app_start Round 1 00:07:00.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:00.745 16:42:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:00.745 16:42:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:00.745 16:42:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72301 /var/tmp/spdk-nbd.sock 00:07:00.745 16:42:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72301 ']' 00:07:00.745 16:42:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.745 16:42:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.745 16:42:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
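Each app_repeat round follows the cycle traced above: create two 64 MB malloc bdevs (4096-byte blocks) over the app's dedicated RPC socket, export them as /dev/nbd0 and /dev/nbd1, push a 1 MiB random pattern through each NBD device and compare it on readback, detach the devices, and finally send spdk_kill_instance SIGTERM, after which app_repeat brings the framework back up for the next round (hence the second "Reactor started" block and Round 1 above). A condensed manual equivalent of one round, using only the commands visible in the trace (the harness additionally re-runs waitfornbd and nbd_get_disks checks between steps):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # the app itself was launched earlier as:
    #   /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4

    $rpc bdev_malloc_create 64 4096              # -> Malloc0
    $rpc bdev_malloc_create 64 4096              # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    randfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$randfile" bs=4096 count=256              # 1 MiB pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$randfile" of="$nbd" bs=4096 count=256 oflag=direct   # write it out
        cmp -b -n 1M "$randfile" "$nbd"                              # verify readback
    done
    rm "$randfile"

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM                                  # end of this round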
00:07:00.745 16:42:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.745 16:42:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:00.745 16:42:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.745 16:42:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:00.745 16:42:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:00.745 Malloc0 00:07:00.745 16:42:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.004 Malloc1 00:07:01.004 16:42:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.004 16:42:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:01.263 /dev/nbd0 00:07:01.263 16:42:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.263 16:42:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.263 16:42:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:01.263 16:42:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:01.263 16:42:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.263 16:42:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.263 16:42:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:01.263 16:42:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:01.263 16:42:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.263 16:42:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.263 16:42:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.263 1+0 records in 00:07:01.263 1+0 records out 
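The dd that just ran is the tail end of the waitfornbd helper: after nbd_start_disk it polls /proc/partitions until the kernel has registered the new device (up to 20 attempts in the trace), then reads a single 4096-byte block back with O_DIRECT into a scratch file and checks that something actually arrived before the test continues. A rough equivalent, with the retry limit and scratch path taken from the trace and the poll interval assumed, since it is not visible in the log:

  waitfornbd() {
      local nbd_name=$1 i size
      local testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                                    # interval assumed; not shown in the trace
      done
      dd if=/dev/"$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct
      size=$(stat -c %s "$testfile")
      rm -f "$testfile"
      [ "$size" != 0 ]                                 # one block must have come back
  }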
00:07:01.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311865 s, 13.1 MB/s 00:07:01.263 16:42:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.263 16:42:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:01.263 16:42:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.263 16:42:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.263 16:42:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:01.263 16:42:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.263 16:42:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.264 16:42:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:01.840 /dev/nbd1 00:07:01.840 16:42:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:01.840 16:42:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:01.840 16:42:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:01.840 16:42:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:01.840 16:42:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.840 16:42:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.840 16:42:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:01.840 16:42:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:01.840 16:42:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.840 16:42:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.840 16:42:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.840 1+0 records in 00:07:01.840 1+0 records out 00:07:01.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284011 s, 14.4 MB/s 00:07:01.840 16:42:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.840 16:42:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:01.840 16:42:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.840 16:42:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.840 16:42:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:01.840 16:42:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.840 16:42:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.840 16:42:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.840 16:42:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.840 16:42:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.840 16:42:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:01.840 { 00:07:01.840 "nbd_device": "/dev/nbd0", 00:07:01.840 "bdev_name": "Malloc0" 00:07:01.840 }, 00:07:01.840 { 00:07:01.840 "nbd_device": "/dev/nbd1", 00:07:01.840 "bdev_name": "Malloc1" 00:07:01.840 } 
00:07:01.840 ]' 00:07:01.840 16:42:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.840 { 00:07:01.840 "nbd_device": "/dev/nbd0", 00:07:01.840 "bdev_name": "Malloc0" 00:07:01.840 }, 00:07:01.840 { 00:07:01.840 "nbd_device": "/dev/nbd1", 00:07:01.840 "bdev_name": "Malloc1" 00:07:01.840 } 00:07:01.840 ]' 00:07:01.840 16:42:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:02.099 /dev/nbd1' 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:02.099 /dev/nbd1' 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:02.099 256+0 records in 00:07:02.099 256+0 records out 00:07:02.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00641831 s, 163 MB/s 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:02.099 256+0 records in 00:07:02.099 256+0 records out 00:07:02.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220034 s, 47.7 MB/s 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.099 256+0 records in 00:07:02.099 256+0 records out 00:07:02.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251278 s, 41.7 MB/s 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:02.099 16:42:25 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.099 16:42:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.358 16:42:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.358 16:42:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.358 16:42:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.358 16:42:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.358 16:42:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.358 16:42:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.358 16:42:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.358 16:42:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.358 16:42:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.358 16:42:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.617 16:42:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.617 16:42:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.617 16:42:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.617 16:42:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.617 16:42:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.617 16:42:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.617 16:42:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.617 16:42:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.617 16:42:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.617 16:42:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.617 16:42:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.876 16:42:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.876 16:42:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.876 16:42:26 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:03.135 16:42:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.135 16:42:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.135 16:42:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.135 16:42:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:03.135 16:42:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.135 16:42:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.135 16:42:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.135 16:42:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.135 16:42:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.135 16:42:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:03.135 16:42:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:03.394 [2024-11-29 16:42:26.999814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.394 [2024-11-29 16:42:27.018344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.394 [2024-11-29 16:42:27.018354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.394 [2024-11-29 16:42:27.048282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.394 [2024-11-29 16:42:27.048418] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:03.394 [2024-11-29 16:42:27.048433] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:06.680 spdk_app_start Round 2 00:07:06.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:06.680 16:42:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:06.680 16:42:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:06.680 16:42:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72301 /var/tmp/spdk-nbd.sock 00:07:06.680 16:42:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72301 ']' 00:07:06.680 16:42:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:06.680 16:42:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.680 16:42:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
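With Round 1 finished and Round 2 starting, the shape of the app_repeat loop is clear from the trace: each pass announces the round, waits for the app (pid 72301 here) to listen on /var/tmp/spdk-nbd.sock, creates two 64 MB malloc bdevs with 4096-byte blocks, runs the NBD write/verify pass, then sends SIGTERM through the RPC and sleeps before the next pass; the app reinitializes itself after each SIGTERM, which is what produces the repeated "reinitialization..." notices. A condensed sketch of that loop, where waitforlisten and nbd_rpc_data_verify stand for the helpers traced elsewhere in this log:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock    # block until the RPC socket answers
      $rpc bdev_malloc_create 64 4096                    # Malloc0
      $rpc bdev_malloc_create 64 4096                    # Malloc1
      nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
      $rpc spdk_kill_instance SIGTERM                    # app restarts itself for the next round
      sleep 3
  done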
00:07:06.680 16:42:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.680 16:42:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.680 16:42:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.680 16:42:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:06.680 16:42:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.680 Malloc0 00:07:06.939 16:42:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.198 Malloc1 00:07:07.198 16:42:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.198 16:42:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:07.457 /dev/nbd0 00:07:07.457 16:42:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.457 16:42:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.457 16:42:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:07.457 16:42:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:07.457 16:42:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.457 16:42:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.457 16:42:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:07.457 16:42:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:07.458 16:42:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.458 16:42:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.458 16:42:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.458 1+0 records in 00:07:07.458 1+0 records out 
00:07:07.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577864 s, 7.1 MB/s 00:07:07.458 16:42:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.458 16:42:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:07.458 16:42:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.458 16:42:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.458 16:42:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:07.458 16:42:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.458 16:42:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.458 16:42:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:07.716 /dev/nbd1 00:07:07.716 16:42:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:07.716 16:42:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:07.716 16:42:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:07.716 16:42:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:07.716 16:42:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.716 16:42:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.716 16:42:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:07.717 16:42:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:07.717 16:42:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.717 16:42:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.717 16:42:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.717 1+0 records in 00:07:07.717 1+0 records out 00:07:07.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316746 s, 12.9 MB/s 00:07:07.717 16:42:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.717 16:42:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:07.717 16:42:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.717 16:42:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.717 16:42:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:07.717 16:42:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.717 16:42:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.717 16:42:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.717 16:42:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.717 16:42:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.975 16:42:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:07.975 { 00:07:07.975 "nbd_device": "/dev/nbd0", 00:07:07.975 "bdev_name": "Malloc0" 00:07:07.975 }, 00:07:07.975 { 00:07:07.975 "nbd_device": "/dev/nbd1", 00:07:07.975 "bdev_name": "Malloc1" 00:07:07.975 } 
00:07:07.975 ]' 00:07:07.975 16:42:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.975 { 00:07:07.975 "nbd_device": "/dev/nbd0", 00:07:07.975 "bdev_name": "Malloc0" 00:07:07.975 }, 00:07:07.975 { 00:07:07.975 "nbd_device": "/dev/nbd1", 00:07:07.975 "bdev_name": "Malloc1" 00:07:07.975 } 00:07:07.975 ]' 00:07:07.976 16:42:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.976 16:42:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:07.976 /dev/nbd1' 00:07:07.976 16:42:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:07.976 /dev/nbd1' 00:07:07.976 16:42:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.234 16:42:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:08.234 16:42:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:08.234 16:42:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:08.234 16:42:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:08.234 16:42:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:08.234 16:42:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.234 16:42:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.234 16:42:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:08.234 16:42:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:08.234 16:42:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:08.234 16:42:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:08.234 256+0 records in 00:07:08.234 256+0 records out 00:07:08.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00780491 s, 134 MB/s 00:07:08.234 16:42:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.234 16:42:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:08.234 256+0 records in 00:07:08.234 256+0 records out 00:07:08.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220715 s, 47.5 MB/s 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:08.235 256+0 records in 00:07:08.235 256+0 records out 00:07:08.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279984 s, 37.5 MB/s 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:08.235 16:42:31 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.235 16:42:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:08.493 16:42:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.493 16:42:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.493 16:42:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.493 16:42:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.493 16:42:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.493 16:42:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.493 16:42:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.493 16:42:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.493 16:42:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.493 16:42:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:08.752 16:42:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:08.752 16:42:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:08.752 16:42:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:08.752 16:42:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.752 16:42:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.752 16:42:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:08.752 16:42:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.752 16:42:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.752 16:42:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.752 16:42:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.752 16:42:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.011 16:42:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:09.011 16:42:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:09.011 16:42:32 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:09.269 16:42:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.269 16:42:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.269 16:42:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.269 16:42:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:09.269 16:42:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.269 16:42:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.269 16:42:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:09.269 16:42:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:09.269 16:42:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:09.269 16:42:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:09.528 16:42:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:09.528 [2024-11-29 16:42:33.205519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.528 [2024-11-29 16:42:33.224101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.528 [2024-11-29 16:42:33.224113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.528 [2024-11-29 16:42:33.253022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.528 [2024-11-29 16:42:33.253131] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:09.528 [2024-11-29 16:42:33.253144] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:12.821 16:42:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72301 /var/tmp/spdk-nbd.sock 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72301 ']' 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
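Round 2 has now repeated the same data-integrity pass as the earlier rounds: write 1 MiB of random data through each NBD device, then compare the device contents against the source file byte for byte. The essential commands (the trace iterates the same two devices via an nbd_list array):

  tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

  # write phase: 256 x 4096-byte blocks of random data, pushed through each device with O_DIRECT
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
  done

  # verify phase: the first 1M of each device must match the random file exactly
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp_file" "$nbd"
  done
  rm "$tmp_file"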
00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:12.821 16:42:36 event.app_repeat -- event/event.sh@39 -- # killprocess 72301 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 72301 ']' 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 72301 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72301 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72301' 00:07:12.821 killing process with pid 72301 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@973 -- # kill 72301 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@978 -- # wait 72301 00:07:12.821 spdk_app_start is called in Round 0. 00:07:12.821 Shutdown signal received, stop current app iteration 00:07:12.821 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:07:12.821 spdk_app_start is called in Round 1. 00:07:12.821 Shutdown signal received, stop current app iteration 00:07:12.821 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:07:12.821 spdk_app_start is called in Round 2. 00:07:12.821 Shutdown signal received, stop current app iteration 00:07:12.821 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 reinitialization... 00:07:12.821 spdk_app_start is called in Round 3. 00:07:12.821 Shutdown signal received, stop current app iteration 00:07:12.821 16:42:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:12.821 16:42:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:12.821 00:07:12.821 real 0m18.664s 00:07:12.821 user 0m43.151s 00:07:12.821 sys 0m2.465s 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.821 16:42:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:12.821 ************************************ 00:07:12.821 END TEST app_repeat 00:07:12.821 ************************************ 00:07:12.821 16:42:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:12.821 16:42:36 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:12.821 16:42:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.821 16:42:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.821 16:42:36 event -- common/autotest_common.sh@10 -- # set +x 00:07:12.821 ************************************ 00:07:12.821 START TEST cpu_locks 00:07:12.821 ************************************ 00:07:12.821 16:42:36 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:13.080 * Looking for test storage... 
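killprocess, invoked above to tear down the app after the final round, does not signal the pid blindly: it confirms the pid is non-empty and still alive, checks the process name with ps so that only the expected SPDK reactor is killed, and then waits for it to exit. A sketch reconstructed from the checks visible in the trace (the name resolves to reactor_0 in this run; the branch taken when it resolves to sudo is not exercised here, and the surrounding control flow is assumed):

  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                           # must still be running
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
      if [ "$process_name" != sudo ]; then                 # reactor_0 in this log
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      wait "$pid"
  }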
00:07:13.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:13.080 16:42:36 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.080 16:42:36 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.080 16:42:36 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.080 16:42:36 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.080 16:42:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:13.080 16:42:36 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.080 16:42:36 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.080 --rc genhtml_branch_coverage=1 00:07:13.080 --rc genhtml_function_coverage=1 00:07:13.080 --rc genhtml_legend=1 00:07:13.080 --rc geninfo_all_blocks=1 00:07:13.080 --rc geninfo_unexecuted_blocks=1 00:07:13.080 00:07:13.080 ' 00:07:13.080 16:42:36 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.080 --rc genhtml_branch_coverage=1 00:07:13.080 --rc genhtml_function_coverage=1 
00:07:13.080 --rc genhtml_legend=1 00:07:13.080 --rc geninfo_all_blocks=1 00:07:13.080 --rc geninfo_unexecuted_blocks=1 00:07:13.080 00:07:13.080 ' 00:07:13.080 16:42:36 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.080 --rc genhtml_branch_coverage=1 00:07:13.080 --rc genhtml_function_coverage=1 00:07:13.080 --rc genhtml_legend=1 00:07:13.080 --rc geninfo_all_blocks=1 00:07:13.080 --rc geninfo_unexecuted_blocks=1 00:07:13.080 00:07:13.080 ' 00:07:13.080 16:42:36 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.080 --rc genhtml_branch_coverage=1 00:07:13.080 --rc genhtml_function_coverage=1 00:07:13.080 --rc genhtml_legend=1 00:07:13.080 --rc geninfo_all_blocks=1 00:07:13.080 --rc geninfo_unexecuted_blocks=1 00:07:13.080 00:07:13.080 ' 00:07:13.080 16:42:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:13.080 16:42:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:13.080 16:42:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:13.080 16:42:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:13.080 16:42:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.080 16:42:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.080 16:42:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.080 ************************************ 00:07:13.080 START TEST default_locks 00:07:13.080 ************************************ 00:07:13.080 16:42:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:13.080 16:42:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72746 00:07:13.080 16:42:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72746 00:07:13.080 16:42:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72746 ']' 00:07:13.080 16:42:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.080 16:42:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.080 16:42:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.081 16:42:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.081 16:42:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.081 16:42:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.081 [2024-11-29 16:42:36.834800] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:07:13.081 [2024-11-29 16:42:36.835477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72746 ] 00:07:13.340 [2024-11-29 16:42:36.962030] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:13.340 [2024-11-29 16:42:36.995292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.340 [2024-11-29 16:42:37.020273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.340 [2024-11-29 16:42:37.065303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.277 16:42:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.277 16:42:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:14.277 16:42:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72746 00:07:14.277 16:42:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72746 00:07:14.277 16:42:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.277 16:42:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72746 00:07:14.277 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72746 ']' 00:07:14.277 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72746 00:07:14.277 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:14.277 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.277 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72746 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.536 killing process with pid 72746 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72746' 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72746 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72746 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72746 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72746 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72746 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 72746 ']' 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.536 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72746) - No such process 00:07:14.536 ERROR: process (pid: 72746) is no longer running 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:14.536 00:07:14.536 real 0m1.547s 00:07:14.536 user 0m1.759s 00:07:14.536 sys 0m0.395s 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.536 16:42:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.536 ************************************ 00:07:14.536 END TEST default_locks 00:07:14.536 ************************************ 00:07:14.796 16:42:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:14.796 16:42:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.796 16:42:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.796 16:42:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.796 ************************************ 00:07:14.796 START TEST default_locks_via_rpc 00:07:14.796 ************************************ 00:07:14.796 16:42:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:14.796 16:42:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72787 00:07:14.796 16:42:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72787 00:07:14.796 16:42:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.796 16:42:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72787 ']' 00:07:14.796 16:42:38 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.796 16:42:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.796 16:42:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.796 16:42:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.796 16:42:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.796 [2024-11-29 16:42:38.432211] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:14.796 [2024-11-29 16:42:38.432347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72787 ] 00:07:14.796 [2024-11-29 16:42:38.560817] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:14.796 [2024-11-29 16:42:38.578908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.054 [2024-11-29 16:42:38.599444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.054 [2024-11-29 16:42:38.638609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72787 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72787 00:07:15.622 16:42:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # 
grep -q spdk_cpu_lock 00:07:16.191 16:42:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72787 00:07:16.191 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72787 ']' 00:07:16.191 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72787 00:07:16.191 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:16.191 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.191 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72787 00:07:16.191 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.191 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.191 killing process with pid 72787 00:07:16.191 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72787' 00:07:16.191 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72787 00:07:16.191 16:42:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72787 00:07:16.450 00:07:16.450 real 0m1.752s 00:07:16.450 user 0m2.007s 00:07:16.450 sys 0m0.467s 00:07:16.450 16:42:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.450 16:42:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.450 ************************************ 00:07:16.450 END TEST default_locks_via_rpc 00:07:16.450 ************************************ 00:07:16.450 16:42:40 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:16.450 16:42:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.450 16:42:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.450 16:42:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.450 ************************************ 00:07:16.450 START TEST non_locking_app_on_locked_coremask 00:07:16.450 ************************************ 00:07:16.450 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:16.450 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72838 00:07:16.450 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72838 /var/tmp/spdk.sock 00:07:16.450 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72838 ']' 00:07:16.450 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.450 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:16.450 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
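The cpu_locks tests all revolve around the per-core lock files an SPDK target takes when started with a core mask (-m 0x1 in this log): locks_exist checks for them with lslocks, default_locks_via_rpc (pid 72787 above) toggles them at runtime through the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs, and the non_locking_app_on_locked_coremask test now starting launches a second target on the same mask with --disable-cpumask-locks -r /var/tmp/spdk2.sock so the two do not fight over the lock. The core check and the RPC round-trip, sketched with the same rpc.py client the trace uses (rpc_cmd in the trace is assumed to be a thin wrapper over it, and the intermediate no_locks check is omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # event/cpu_locks.sh locks_exist: does this pid hold an spdk_cpu_lock file?
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  # default_locks_via_rpc: drop the locks over RPC, take them back, prove they are held again.
  "$rpc" framework_disable_cpumask_locks
  "$rpc" framework_enable_cpumask_locks
  locks_exist "$spdk_tgt_pid"                              # pid 72787 passes this check in the trace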
00:07:16.450 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.450 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.450 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.450 [2024-11-29 16:42:40.239364] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:16.450 [2024-11-29 16:42:40.239473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72838 ] 00:07:16.710 [2024-11-29 16:42:40.365037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:16.710 [2024-11-29 16:42:40.391802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.710 [2024-11-29 16:42:40.411340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.710 [2024-11-29 16:42:40.450274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.969 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.969 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:16.969 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72847 00:07:16.969 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72847 /var/tmp/spdk2.sock 00:07:16.969 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:16.969 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72847 ']' 00:07:16.969 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.969 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.969 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.969 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.969 16:42:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.969 [2024-11-29 16:42:40.645485] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
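waitforlisten, used above to block until spdk_tgt (pid 72838) is ready, is the helper behind the "Waiting for process to start up and listen on UNIX domain socket ..." message: it polls until the target's RPC socket appears or the retry budget runs out. A rough stand-in for that pattern, not the actual implementation from autotest_common.sh, might be:

```bash
# Rough stand-in for waitforlisten: poll until the RPC socket exists and the
# process is still alive, or give up after max_retries. Variable names mirror
# the log above (rpc_addr, max_retries); the loop body itself is illustrative.
wait_for_rpc_socket() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S $rpc_addr ]] && return 0           # socket is listening
        sleep 0.1
    done
    return 1
}
```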
00:07:16.969 [2024-11-29 16:42:40.645588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72847 ] 00:07:17.229 [2024-11-29 16:42:40.771256] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:17.229 [2024-11-29 16:42:40.805627] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:17.229 [2024-11-29 16:42:40.805671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.229 [2024-11-29 16:42:40.852639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.229 [2024-11-29 16:42:40.928943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.165 16:42:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.165 16:42:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:18.165 16:42:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72838 00:07:18.165 16:42:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72838 00:07:18.165 16:42:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.733 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72838 00:07:18.733 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72838 ']' 00:07:18.733 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72838 00:07:18.733 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:18.733 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.733 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72838 00:07:18.733 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.733 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.733 killing process with pid 72838 00:07:18.733 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72838' 00:07:18.733 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72838 00:07:18.733 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72838 00:07:18.993 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72847 00:07:18.993 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72847 ']' 00:07:18.993 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72847 00:07:19.252 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:19.252 16:42:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.252 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72847 00:07:19.252 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.252 killing process with pid 72847 00:07:19.252 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.252 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72847' 00:07:19.252 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72847 00:07:19.252 16:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72847 00:07:19.252 00:07:19.252 real 0m2.870s 00:07:19.252 user 0m3.393s 00:07:19.252 sys 0m0.769s 00:07:19.252 16:42:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.252 16:42:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.252 ************************************ 00:07:19.252 END TEST non_locking_app_on_locked_coremask 00:07:19.252 ************************************ 00:07:19.511 16:42:43 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:19.511 16:42:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.511 16:42:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.511 16:42:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.511 ************************************ 00:07:19.511 START TEST locking_app_on_unlocked_coremask 00:07:19.511 ************************************ 00:07:19.511 16:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:19.511 16:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72909 00:07:19.511 16:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72909 /var/tmp/spdk.sock 00:07:19.511 16:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72909 ']' 00:07:19.511 16:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:19.511 16:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.511 16:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.511 16:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
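The non_locking_app_on_locked_coremask case that just finished hinges on launching two targets on the same core: the first spdk_tgt takes the core-0 lock, while the second is started with --disable-cpumask-locks and its own RPC socket so it can share core 0 without claiming the lock. Reduced to the essential commands from the invocations logged above:

```bash
# Reduced from the spdk_tgt invocations above; paths match this test environment.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_BIN" -m 0x1 &                                                  # claims /var/tmp/spdk_cpu_lock_000
"$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0, takes no lock
```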
00:07:19.511 16:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.511 16:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.511 [2024-11-29 16:42:43.160805] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:19.511 [2024-11-29 16:42:43.160904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72909 ] 00:07:19.511 [2024-11-29 16:42:43.288795] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:19.770 [2024-11-29 16:42:43.308185] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:19.770 [2024-11-29 16:42:43.308211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.770 [2024-11-29 16:42:43.327877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.770 [2024-11-29 16:42:43.363794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.338 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.338 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:20.338 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72925 00:07:20.338 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:20.338 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72925 /var/tmp/spdk2.sock 00:07:20.338 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72925 ']' 00:07:20.338 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.338 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.338 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.338 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.338 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.597 [2024-11-29 16:42:44.157917] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:20.597 [2024-11-29 16:42:44.158000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72925 ] 00:07:20.597 [2024-11-29 16:42:44.276918] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:20.597 [2024-11-29 16:42:44.316134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.597 [2024-11-29 16:42:44.356641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.856 [2024-11-29 16:42:44.428470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.856 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.856 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:20.856 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72925 00:07:20.856 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72925 00:07:20.856 16:42:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:21.790 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72909 00:07:21.790 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72909 ']' 00:07:21.790 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72909 00:07:21.790 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:21.790 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.790 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72909 00:07:21.790 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.790 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.790 killing process with pid 72909 00:07:21.791 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72909' 00:07:21.791 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72909 00:07:21.791 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72909 00:07:22.359 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72925 00:07:22.359 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72925 ']' 00:07:22.359 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72925 00:07:22.359 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:22.359 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.359 16:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72925 00:07:22.359 16:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.359 16:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.359 killing process with pid 72925 00:07:22.359 16:42:46 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72925' 00:07:22.359 16:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72925 00:07:22.359 16:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72925 00:07:22.618 00:07:22.618 real 0m3.159s 00:07:22.618 user 0m3.644s 00:07:22.618 sys 0m0.896s 00:07:22.618 16:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.618 16:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.618 ************************************ 00:07:22.618 END TEST locking_app_on_unlocked_coremask 00:07:22.618 ************************************ 00:07:22.618 16:42:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:22.618 16:42:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.618 16:42:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.618 16:42:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.618 ************************************ 00:07:22.618 START TEST locking_app_on_locked_coremask 00:07:22.618 ************************************ 00:07:22.618 16:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:22.618 16:42:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72979 00:07:22.618 16:42:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72979 /var/tmp/spdk.sock 00:07:22.618 16:42:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.618 16:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72979 ']' 00:07:22.618 16:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.618 16:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.618 16:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.618 16:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.618 16:42:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.618 [2024-11-29 16:42:46.377301] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:22.618 [2024-11-29 16:42:46.377434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72979 ] 00:07:22.878 [2024-11-29 16:42:46.504907] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:22.878 [2024-11-29 16:42:46.525489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.878 [2024-11-29 16:42:46.546874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.878 [2024-11-29 16:42:46.586166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.952 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.952 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:23.952 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:23.952 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72995 00:07:23.952 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72995 /var/tmp/spdk2.sock 00:07:23.953 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:23.953 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72995 /var/tmp/spdk2.sock 00:07:23.953 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:23.953 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.953 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:23.953 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.953 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72995 /var/tmp/spdk2.sock 00:07:23.953 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72995 ']' 00:07:23.953 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.953 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.953 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.953 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.953 16:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.953 [2024-11-29 16:42:47.395003] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:23.953 [2024-11-29 16:42:47.395114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72995 ] 00:07:23.953 [2024-11-29 16:42:47.514202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:23.953 [2024-11-29 16:42:47.551710] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72979 has claimed it. 00:07:23.953 [2024-11-29 16:42:47.551792] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:24.520 ERROR: process (pid: 72995) is no longer running 00:07:24.520 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72995) - No such process 00:07:24.520 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.520 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:24.520 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:24.520 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.520 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:24.520 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.520 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72979 00:07:24.520 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72979 00:07:24.520 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.780 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72979 00:07:24.780 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72979 ']' 00:07:24.780 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72979 00:07:24.780 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:24.780 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.780 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72979 00:07:24.780 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.780 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.780 killing process with pid 72979 00:07:24.780 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72979' 00:07:24.780 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72979 00:07:24.780 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72979 00:07:25.039 00:07:25.039 real 0m2.399s 00:07:25.039 user 0m2.892s 00:07:25.039 sys 0m0.490s 00:07:25.039 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.039 16:42:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.039 ************************************ 00:07:25.039 END TEST locking_app_on_locked_coremask 00:07:25.039 ************************************ 00:07:25.039 16:42:48 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask 
locking_overlapped_coremask 00:07:25.039 16:42:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.039 16:42:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.039 16:42:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.039 ************************************ 00:07:25.039 START TEST locking_overlapped_coremask 00:07:25.039 ************************************ 00:07:25.039 16:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:25.039 16:42:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=73041 00:07:25.039 16:42:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 73041 /var/tmp/spdk.sock 00:07:25.039 16:42:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:25.039 16:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 73041 ']' 00:07:25.039 16:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.039 16:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.040 16:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.040 16:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.040 16:42:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.040 [2024-11-29 16:42:48.824536] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:25.040 [2024-11-29 16:42:48.824657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73041 ] 00:07:25.299 [2024-11-29 16:42:48.953183] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
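The expected-failure checks in the test that just ended (and in the overlapped-coremask test starting here) rely on the NOT wrapper from autotest_common.sh: waitforlisten against the second target must fail, and the wrapper turns that failure into a pass. A minimal sketch of such an exit-status inversion, simplified relative to the real helper, is:

```bash
# Minimal sketch in the spirit of NOT() from autotest_common.sh:
# succeed only if the wrapped command fails.
not() {
    if "$@"; then
        return 1        # command unexpectedly succeeded
    fi
    return 0            # command failed, which is what the caller expected
}

# e.g. not waitforlisten 72995 /var/tmp/spdk2.sock
```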
00:07:25.299 [2024-11-29 16:42:48.971506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.299 [2024-11-29 16:42:48.993115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.299 [2024-11-29 16:42:48.992991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.299 [2024-11-29 16:42:48.993109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.299 [2024-11-29 16:42:49.030602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=73059 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 73059 /var/tmp/spdk2.sock 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 73059 /var/tmp/spdk2.sock 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:26.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 73059 /var/tmp/spdk2.sock 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 73059 ']' 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.236 16:42:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.236 [2024-11-29 16:42:49.857363] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:07:26.237 [2024-11-29 16:42:49.857463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73059 ] 00:07:26.237 [2024-11-29 16:42:49.988967] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:26.495 [2024-11-29 16:42:50.031051] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73041 has claimed it. 00:07:26.495 [2024-11-29 16:42:50.031119] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:27.062 ERROR: process (pid: 73059) is no longer running 00:07:27.062 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (73059) - No such process 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 73041 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 73041 ']' 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 73041 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73041 00:07:27.062 killing process with pid 73041 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 73041' 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 73041 00:07:27.062 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 73041 00:07:27.321 00:07:27.321 real 0m2.103s 00:07:27.321 user 0m6.209s 00:07:27.321 sys 0m0.339s 00:07:27.321 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.321 16:42:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.321 ************************************ 00:07:27.321 END TEST locking_overlapped_coremask 00:07:27.321 ************************************ 00:07:27.321 16:42:50 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:27.321 16:42:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.321 16:42:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.321 16:42:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.321 ************************************ 00:07:27.321 START TEST locking_overlapped_coremask_via_rpc 00:07:27.321 ************************************ 00:07:27.321 16:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:27.321 16:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=73099 00:07:27.321 16:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 73099 /var/tmp/spdk.sock 00:07:27.321 16:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:27.321 16:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73099 ']' 00:07:27.321 16:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.321 16:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.321 16:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.321 16:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.321 16:42:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.321 [2024-11-29 16:42:50.984305] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:27.321 [2024-11-29 16:42:50.984438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73099 ] 00:07:27.579 [2024-11-29 16:42:51.114206] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
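The "Cannot create lock on core 2" failure in the locking_overlapped_coremask run above follows directly from the two masks: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so core 2 is contended. The overlap can be confirmed with plain shell arithmetic:

```bash
# Core masks from the two spdk_tgt invocations above.
mask1=0x7    # cores 0,1,2
mask2=0x1c   # cores 2,3,4
printf 'overlapping cores mask: 0x%x\n' $(( mask1 & mask2 ))   # prints 0x4 -> core 2
```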
00:07:27.579 [2024-11-29 16:42:51.141204] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:27.579 [2024-11-29 16:42:51.141241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.579 [2024-11-29 16:42:51.165126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.579 [2024-11-29 16:42:51.165233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.579 [2024-11-29 16:42:51.165239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.579 [2024-11-29 16:42:51.204296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.148 16:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.148 16:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:28.148 16:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=73117 00:07:28.148 16:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 73117 /var/tmp/spdk2.sock 00:07:28.148 16:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73117 ']' 00:07:28.148 16:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:28.148 16:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.148 16:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:28.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:28.148 16:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.148 16:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.148 16:42:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:28.407 [2024-11-29 16:42:51.997049] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:28.407 [2024-11-29 16:42:51.997141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73117 ] 00:07:28.407 [2024-11-29 16:42:52.125193] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:28.407 [2024-11-29 16:42:52.163045] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:28.407 [2024-11-29 16:42:52.163078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.665 [2024-11-29 16:42:52.208726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.665 [2024-11-29 16:42:52.212514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:28.665 [2024-11-29 16:42:52.212517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.665 [2024-11-29 16:42:52.283277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.232 [2024-11-29 16:42:52.981520] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73099 has claimed it. 00:07:29.232 request: 00:07:29.232 { 00:07:29.232 "method": "framework_enable_cpumask_locks", 00:07:29.232 "req_id": 1 00:07:29.232 } 00:07:29.232 Got JSON-RPC error response 00:07:29.232 response: 00:07:29.232 { 00:07:29.232 "code": -32603, 00:07:29.232 "message": "Failed to claim CPU core: 2" 00:07:29.232 } 00:07:29.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
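The JSON-RPC exchange above is the via-RPC variant of the same check: both targets start with --disable-cpumask-locks, framework_enable_cpumask_locks succeeds on the first target, and the same call on the second returns error -32603 "Failed to claim CPU core: 2" because the first target already holds that core's lock. Issued by hand with SPDK's rpc.py (socket paths as in this run), the two calls would look roughly like:

```bash
# First target (default socket): succeeds and takes the per-core lock files.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks

# Second target (spdk2.sock): expected to fail with JSON-RPC error -32603,
# "Failed to claim CPU core: 2", as captured in the response above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
```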
00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 73099 /var/tmp/spdk.sock 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73099 ']' 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.232 16:42:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.491 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.491 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:29.491 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 73117 /var/tmp/spdk2.sock 00:07:29.491 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73117 ']' 00:07:29.491 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.491 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.491 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:29.491 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.491 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.058 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.058 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:30.058 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:30.058 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:30.058 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:30.058 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:30.058 00:07:30.058 real 0m2.679s 00:07:30.058 user 0m1.433s 00:07:30.058 sys 0m0.185s 00:07:30.059 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.059 16:42:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.059 ************************************ 00:07:30.059 END TEST locking_overlapped_coremask_via_rpc 00:07:30.059 ************************************ 00:07:30.059 16:42:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:30.059 16:42:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73099 ]] 00:07:30.059 16:42:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73099 00:07:30.059 16:42:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73099 ']' 00:07:30.059 16:42:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73099 00:07:30.059 16:42:53 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:30.059 16:42:53 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.059 16:42:53 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73099 00:07:30.059 killing process with pid 73099 00:07:30.059 16:42:53 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.059 16:42:53 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.059 16:42:53 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73099' 00:07:30.059 16:42:53 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 73099 00:07:30.059 16:42:53 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 73099 00:07:30.317 16:42:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73117 ]] 00:07:30.317 16:42:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73117 00:07:30.317 16:42:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73117 ']' 00:07:30.317 16:42:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73117 00:07:30.317 16:42:53 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:30.317 16:42:53 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.317 
16:42:53 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73117 00:07:30.317 killing process with pid 73117 00:07:30.317 16:42:53 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:30.317 16:42:53 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:30.317 16:42:53 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73117' 00:07:30.317 16:42:53 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 73117 00:07:30.317 16:42:53 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 73117 00:07:30.576 16:42:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:30.576 16:42:54 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:30.576 16:42:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73099 ]] 00:07:30.576 16:42:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73099 00:07:30.576 16:42:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73099 ']' 00:07:30.576 16:42:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73099 00:07:30.576 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73099) - No such process 00:07:30.576 Process with pid 73099 is not found 00:07:30.576 16:42:54 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 73099 is not found' 00:07:30.576 16:42:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73117 ]] 00:07:30.576 16:42:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73117 00:07:30.576 16:42:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73117 ']' 00:07:30.576 16:42:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73117 00:07:30.576 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73117) - No such process 00:07:30.576 Process with pid 73117 is not found 00:07:30.576 16:42:54 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 73117 is not found' 00:07:30.576 16:42:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:30.576 00:07:30.576 real 0m17.584s 00:07:30.576 user 0m33.559s 00:07:30.576 sys 0m4.245s 00:07:30.576 16:42:54 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.576 16:42:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.576 ************************************ 00:07:30.576 END TEST cpu_locks 00:07:30.576 ************************************ 00:07:30.576 00:07:30.576 real 0m44.207s 00:07:30.576 user 1m28.807s 00:07:30.576 sys 0m7.383s 00:07:30.576 16:42:54 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.576 16:42:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.576 ************************************ 00:07:30.576 END TEST event 00:07:30.576 ************************************ 00:07:30.576 16:42:54 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:30.576 16:42:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.576 16:42:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.576 16:42:54 -- common/autotest_common.sh@10 -- # set +x 00:07:30.576 ************************************ 00:07:30.576 START TEST thread 00:07:30.576 ************************************ 00:07:30.576 16:42:54 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:30.576 * Looking for test storage... 
00:07:30.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:30.576 16:42:54 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.576 16:42:54 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.576 16:42:54 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.835 16:42:54 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.835 16:42:54 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.835 16:42:54 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.835 16:42:54 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.835 16:42:54 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.835 16:42:54 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.835 16:42:54 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.835 16:42:54 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.835 16:42:54 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.835 16:42:54 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.835 16:42:54 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.835 16:42:54 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.835 16:42:54 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:30.835 16:42:54 thread -- scripts/common.sh@345 -- # : 1 00:07:30.835 16:42:54 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.835 16:42:54 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.835 16:42:54 thread -- scripts/common.sh@365 -- # decimal 1 00:07:30.835 16:42:54 thread -- scripts/common.sh@353 -- # local d=1 00:07:30.835 16:42:54 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.835 16:42:54 thread -- scripts/common.sh@355 -- # echo 1 00:07:30.835 16:42:54 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.835 16:42:54 thread -- scripts/common.sh@366 -- # decimal 2 00:07:30.835 16:42:54 thread -- scripts/common.sh@353 -- # local d=2 00:07:30.835 16:42:54 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.835 16:42:54 thread -- scripts/common.sh@355 -- # echo 2 00:07:30.835 16:42:54 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.835 16:42:54 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.835 16:42:54 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.835 16:42:54 thread -- scripts/common.sh@368 -- # return 0 00:07:30.835 16:42:54 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.835 16:42:54 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.835 --rc genhtml_branch_coverage=1 00:07:30.836 --rc genhtml_function_coverage=1 00:07:30.836 --rc genhtml_legend=1 00:07:30.836 --rc geninfo_all_blocks=1 00:07:30.836 --rc geninfo_unexecuted_blocks=1 00:07:30.836 00:07:30.836 ' 00:07:30.836 16:42:54 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.836 --rc genhtml_branch_coverage=1 00:07:30.836 --rc genhtml_function_coverage=1 00:07:30.836 --rc genhtml_legend=1 00:07:30.836 --rc geninfo_all_blocks=1 00:07:30.836 --rc geninfo_unexecuted_blocks=1 00:07:30.836 00:07:30.836 ' 00:07:30.836 16:42:54 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:30.836 --rc genhtml_branch_coverage=1 00:07:30.836 --rc genhtml_function_coverage=1 00:07:30.836 --rc genhtml_legend=1 00:07:30.836 --rc geninfo_all_blocks=1 00:07:30.836 --rc geninfo_unexecuted_blocks=1 00:07:30.836 00:07:30.836 ' 00:07:30.836 16:42:54 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.836 --rc genhtml_branch_coverage=1 00:07:30.836 --rc genhtml_function_coverage=1 00:07:30.836 --rc genhtml_legend=1 00:07:30.836 --rc geninfo_all_blocks=1 00:07:30.836 --rc geninfo_unexecuted_blocks=1 00:07:30.836 00:07:30.836 ' 00:07:30.836 16:42:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:30.836 16:42:54 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:30.836 16:42:54 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.836 16:42:54 thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.836 ************************************ 00:07:30.836 START TEST thread_poller_perf 00:07:30.836 ************************************ 00:07:30.836 16:42:54 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:30.836 [2024-11-29 16:42:54.468434] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:30.836 [2024-11-29 16:42:54.468547] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73247 ] 00:07:30.836 [2024-11-29 16:42:54.583939] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:30.836 [2024-11-29 16:42:54.608129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.094 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:31.094 [2024-11-29 16:42:54.628009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.029 [2024-11-29T16:42:55.821Z] ====================================== 00:07:32.029 [2024-11-29T16:42:55.821Z] busy:2210905702 (cyc) 00:07:32.029 [2024-11-29T16:42:55.821Z] total_run_count: 369000 00:07:32.029 [2024-11-29T16:42:55.821Z] tsc_hz: 2200000000 (cyc) 00:07:32.029 [2024-11-29T16:42:55.821Z] ====================================== 00:07:32.029 [2024-11-29T16:42:55.821Z] poller_cost: 5991 (cyc), 2723 (nsec) 00:07:32.029 00:07:32.029 real 0m1.213s 00:07:32.029 user 0m1.075s 00:07:32.029 sys 0m0.033s 00:07:32.029 16:42:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.029 16:42:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:32.029 ************************************ 00:07:32.029 END TEST thread_poller_perf 00:07:32.029 ************************************ 00:07:32.029 16:42:55 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:32.029 16:42:55 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:32.029 16:42:55 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.029 16:42:55 thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.029 ************************************ 00:07:32.029 START TEST thread_poller_perf 00:07:32.029 ************************************ 00:07:32.029 16:42:55 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:32.029 [2024-11-29 16:42:55.736832] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:32.029 [2024-11-29 16:42:55.736933] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73277 ] 00:07:32.287 [2024-11-29 16:42:55.851615] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:32.287 [2024-11-29 16:42:55.875162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.287 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:32.287 [2024-11-29 16:42:55.895826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.224 [2024-11-29T16:42:57.016Z] ====================================== 00:07:33.224 [2024-11-29T16:42:57.016Z] busy:2202429333 (cyc) 00:07:33.224 [2024-11-29T16:42:57.016Z] total_run_count: 4889000 00:07:33.224 [2024-11-29T16:42:57.016Z] tsc_hz: 2200000000 (cyc) 00:07:33.224 [2024-11-29T16:42:57.016Z] ====================================== 00:07:33.224 [2024-11-29T16:42:57.016Z] poller_cost: 450 (cyc), 204 (nsec) 00:07:33.224 00:07:33.224 real 0m1.206s 00:07:33.224 user 0m1.072s 00:07:33.224 sys 0m0.029s 00:07:33.224 16:42:56 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.224 16:42:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:33.224 ************************************ 00:07:33.224 END TEST thread_poller_perf 00:07:33.224 ************************************ 00:07:33.224 16:42:56 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:33.224 00:07:33.224 real 0m2.707s 00:07:33.224 user 0m2.305s 00:07:33.224 sys 0m0.187s 00:07:33.224 16:42:56 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.224 16:42:56 thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.224 ************************************ 00:07:33.224 END TEST thread 00:07:33.224 ************************************ 00:07:33.224 16:42:57 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:33.224 16:42:57 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:33.224 16:42:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.224 16:42:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.224 16:42:57 -- common/autotest_common.sh@10 -- # set +x 00:07:33.483 ************************************ 00:07:33.483 START TEST app_cmdline 00:07:33.483 ************************************ 00:07:33.483 16:42:57 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:33.483 * Looking for test storage... 
00:07:33.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:33.483 16:42:57 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:33.483 16:42:57 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:33.483 16:42:57 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:33.483 16:42:57 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:33.483 16:42:57 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.483 16:42:57 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.483 16:42:57 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.483 16:42:57 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.483 16:42:57 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.483 16:42:57 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.483 16:42:57 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.483 16:42:57 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.483 16:42:57 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.484 16:42:57 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:33.484 16:42:57 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.484 16:42:57 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:33.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.484 --rc genhtml_branch_coverage=1 00:07:33.484 --rc genhtml_function_coverage=1 00:07:33.484 --rc genhtml_legend=1 00:07:33.484 --rc geninfo_all_blocks=1 00:07:33.484 --rc geninfo_unexecuted_blocks=1 00:07:33.484 00:07:33.484 ' 00:07:33.484 16:42:57 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:33.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.484 --rc genhtml_branch_coverage=1 00:07:33.484 --rc genhtml_function_coverage=1 00:07:33.484 --rc genhtml_legend=1 00:07:33.484 --rc geninfo_all_blocks=1 00:07:33.484 --rc geninfo_unexecuted_blocks=1 00:07:33.484 
00:07:33.484 ' 00:07:33.484 16:42:57 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:33.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.484 --rc genhtml_branch_coverage=1 00:07:33.484 --rc genhtml_function_coverage=1 00:07:33.484 --rc genhtml_legend=1 00:07:33.484 --rc geninfo_all_blocks=1 00:07:33.484 --rc geninfo_unexecuted_blocks=1 00:07:33.484 00:07:33.484 ' 00:07:33.484 16:42:57 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:33.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.484 --rc genhtml_branch_coverage=1 00:07:33.484 --rc genhtml_function_coverage=1 00:07:33.484 --rc genhtml_legend=1 00:07:33.484 --rc geninfo_all_blocks=1 00:07:33.484 --rc geninfo_unexecuted_blocks=1 00:07:33.484 00:07:33.484 ' 00:07:33.484 16:42:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:33.484 16:42:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73360 00:07:33.484 16:42:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73360 00:07:33.484 16:42:57 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:33.484 16:42:57 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73360 ']' 00:07:33.484 16:42:57 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.484 16:42:57 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.484 16:42:57 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.484 16:42:57 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.484 16:42:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:33.484 [2024-11-29 16:42:57.269985] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:33.484 [2024-11-29 16:42:57.270611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73360 ] 00:07:33.743 [2024-11-29 16:42:57.396587] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:33.743 [2024-11-29 16:42:57.426572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.743 [2024-11-29 16:42:57.449276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.743 [2024-11-29 16:42:57.490320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.004 16:42:57 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.004 16:42:57 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:34.004 16:42:57 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:34.264 { 00:07:34.264 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:07:34.264 "fields": { 00:07:34.264 "major": 25, 00:07:34.264 "minor": 1, 00:07:34.264 "patch": 0, 00:07:34.264 "suffix": "-pre", 00:07:34.264 "commit": "35cd3e84d" 00:07:34.264 } 00:07:34.264 } 00:07:34.264 16:42:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:34.264 16:42:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:34.264 16:42:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:34.264 16:42:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:34.264 16:42:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:34.264 16:42:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:34.264 16:42:57 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.264 16:42:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:34.264 16:42:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:34.264 16:42:57 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.264 16:42:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:34.264 16:42:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:34.264 16:42:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.264 16:42:57 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:34.264 16:42:57 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.264 16:42:57 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.264 16:42:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.264 16:42:57 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.264 16:42:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.264 16:42:57 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.264 16:42:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.264 16:42:57 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.264 16:42:57 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:34.264 16:42:57 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.524 request: 00:07:34.524 { 00:07:34.524 "method": "env_dpdk_get_mem_stats", 00:07:34.524 "req_id": 1 
00:07:34.524 } 00:07:34.524 Got JSON-RPC error response 00:07:34.524 response: 00:07:34.524 { 00:07:34.524 "code": -32601, 00:07:34.524 "message": "Method not found" 00:07:34.524 } 00:07:34.524 16:42:58 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:34.524 16:42:58 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.524 16:42:58 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:34.524 16:42:58 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.524 16:42:58 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73360 00:07:34.524 16:42:58 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73360 ']' 00:07:34.524 16:42:58 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73360 00:07:34.524 16:42:58 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:34.524 16:42:58 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.524 16:42:58 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73360 00:07:34.524 killing process with pid 73360 00:07:34.524 16:42:58 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.524 16:42:58 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.524 16:42:58 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73360' 00:07:34.524 16:42:58 app_cmdline -- common/autotest_common.sh@973 -- # kill 73360 00:07:34.524 16:42:58 app_cmdline -- common/autotest_common.sh@978 -- # wait 73360 00:07:34.783 00:07:34.783 real 0m1.516s 00:07:34.783 user 0m2.014s 00:07:34.783 sys 0m0.376s 00:07:34.783 16:42:58 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.783 ************************************ 00:07:34.783 END TEST app_cmdline 00:07:34.783 ************************************ 00:07:34.783 16:42:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:35.042 16:42:58 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:35.042 16:42:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.042 16:42:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.042 16:42:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.042 ************************************ 00:07:35.042 START TEST version 00:07:35.042 ************************************ 00:07:35.042 16:42:58 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:35.042 * Looking for test storage... 
00:07:35.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:35.042 16:42:58 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:35.042 16:42:58 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:35.042 16:42:58 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:35.042 16:42:58 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:35.042 16:42:58 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.042 16:42:58 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.042 16:42:58 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.042 16:42:58 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.042 16:42:58 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.042 16:42:58 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.042 16:42:58 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.042 16:42:58 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.042 16:42:58 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.042 16:42:58 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.042 16:42:58 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.042 16:42:58 version -- scripts/common.sh@344 -- # case "$op" in 00:07:35.042 16:42:58 version -- scripts/common.sh@345 -- # : 1 00:07:35.042 16:42:58 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.042 16:42:58 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.042 16:42:58 version -- scripts/common.sh@365 -- # decimal 1 00:07:35.042 16:42:58 version -- scripts/common.sh@353 -- # local d=1 00:07:35.042 16:42:58 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.042 16:42:58 version -- scripts/common.sh@355 -- # echo 1 00:07:35.043 16:42:58 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.043 16:42:58 version -- scripts/common.sh@366 -- # decimal 2 00:07:35.043 16:42:58 version -- scripts/common.sh@353 -- # local d=2 00:07:35.043 16:42:58 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.043 16:42:58 version -- scripts/common.sh@355 -- # echo 2 00:07:35.043 16:42:58 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.043 16:42:58 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.043 16:42:58 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.043 16:42:58 version -- scripts/common.sh@368 -- # return 0 00:07:35.043 16:42:58 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.043 16:42:58 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:35.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.043 --rc genhtml_branch_coverage=1 00:07:35.043 --rc genhtml_function_coverage=1 00:07:35.043 --rc genhtml_legend=1 00:07:35.043 --rc geninfo_all_blocks=1 00:07:35.043 --rc geninfo_unexecuted_blocks=1 00:07:35.043 00:07:35.043 ' 00:07:35.043 16:42:58 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:35.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.043 --rc genhtml_branch_coverage=1 00:07:35.043 --rc genhtml_function_coverage=1 00:07:35.043 --rc genhtml_legend=1 00:07:35.043 --rc geninfo_all_blocks=1 00:07:35.043 --rc geninfo_unexecuted_blocks=1 00:07:35.043 00:07:35.043 ' 00:07:35.043 16:42:58 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:35.043 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:35.043 --rc genhtml_branch_coverage=1 00:07:35.043 --rc genhtml_function_coverage=1 00:07:35.043 --rc genhtml_legend=1 00:07:35.043 --rc geninfo_all_blocks=1 00:07:35.043 --rc geninfo_unexecuted_blocks=1 00:07:35.043 00:07:35.043 ' 00:07:35.043 16:42:58 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:35.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.043 --rc genhtml_branch_coverage=1 00:07:35.043 --rc genhtml_function_coverage=1 00:07:35.043 --rc genhtml_legend=1 00:07:35.043 --rc geninfo_all_blocks=1 00:07:35.043 --rc geninfo_unexecuted_blocks=1 00:07:35.043 00:07:35.043 ' 00:07:35.043 16:42:58 version -- app/version.sh@17 -- # get_header_version major 00:07:35.043 16:42:58 version -- app/version.sh@14 -- # cut -f2 00:07:35.043 16:42:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.043 16:42:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.043 16:42:58 version -- app/version.sh@17 -- # major=25 00:07:35.043 16:42:58 version -- app/version.sh@18 -- # get_header_version minor 00:07:35.043 16:42:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.043 16:42:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.043 16:42:58 version -- app/version.sh@14 -- # cut -f2 00:07:35.043 16:42:58 version -- app/version.sh@18 -- # minor=1 00:07:35.043 16:42:58 version -- app/version.sh@19 -- # get_header_version patch 00:07:35.043 16:42:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.043 16:42:58 version -- app/version.sh@14 -- # cut -f2 00:07:35.043 16:42:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.043 16:42:58 version -- app/version.sh@19 -- # patch=0 00:07:35.043 16:42:58 version -- app/version.sh@20 -- # get_header_version suffix 00:07:35.043 16:42:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.043 16:42:58 version -- app/version.sh@14 -- # cut -f2 00:07:35.043 16:42:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.043 16:42:58 version -- app/version.sh@20 -- # suffix=-pre 00:07:35.043 16:42:58 version -- app/version.sh@22 -- # version=25.1 00:07:35.043 16:42:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:35.043 16:42:58 version -- app/version.sh@28 -- # version=25.1rc0 00:07:35.043 16:42:58 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:35.043 16:42:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:35.302 16:42:58 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:35.302 16:42:58 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:35.302 00:07:35.302 real 0m0.255s 00:07:35.302 user 0m0.161s 00:07:35.302 sys 0m0.127s 00:07:35.302 ************************************ 00:07:35.302 END TEST version 00:07:35.302 ************************************ 00:07:35.302 16:42:58 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.302 16:42:58 version -- common/autotest_common.sh@10 -- # set +x 00:07:35.302 16:42:58 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:35.302 16:42:58 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:35.302 16:42:58 -- spdk/autotest.sh@194 -- # uname -s 00:07:35.302 16:42:58 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:35.302 16:42:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:35.302 16:42:58 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:35.302 16:42:58 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:35.302 16:42:58 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:35.302 16:42:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.302 16:42:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.302 16:42:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.302 ************************************ 00:07:35.302 START TEST spdk_dd 00:07:35.302 ************************************ 00:07:35.302 16:42:58 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:35.302 * Looking for test storage... 00:07:35.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:35.302 16:42:58 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:35.302 16:42:58 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:35.302 16:42:58 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:07:35.302 16:42:59 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.302 16:42:59 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:35.303 16:42:59 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.303 16:42:59 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:35.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.303 --rc genhtml_branch_coverage=1 00:07:35.303 --rc genhtml_function_coverage=1 00:07:35.303 --rc genhtml_legend=1 00:07:35.303 --rc geninfo_all_blocks=1 00:07:35.303 --rc geninfo_unexecuted_blocks=1 00:07:35.303 00:07:35.303 ' 00:07:35.303 16:42:59 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:35.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.303 --rc genhtml_branch_coverage=1 00:07:35.303 --rc genhtml_function_coverage=1 00:07:35.303 --rc genhtml_legend=1 00:07:35.303 --rc geninfo_all_blocks=1 00:07:35.303 --rc geninfo_unexecuted_blocks=1 00:07:35.303 00:07:35.303 ' 00:07:35.303 16:42:59 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:35.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.303 --rc genhtml_branch_coverage=1 00:07:35.303 --rc genhtml_function_coverage=1 00:07:35.303 --rc genhtml_legend=1 00:07:35.303 --rc geninfo_all_blocks=1 00:07:35.303 --rc geninfo_unexecuted_blocks=1 00:07:35.303 00:07:35.303 ' 00:07:35.303 16:42:59 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:35.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.303 --rc genhtml_branch_coverage=1 00:07:35.303 --rc genhtml_function_coverage=1 00:07:35.303 --rc genhtml_legend=1 00:07:35.303 --rc geninfo_all_blocks=1 00:07:35.303 --rc geninfo_unexecuted_blocks=1 00:07:35.303 00:07:35.303 ' 00:07:35.303 16:42:59 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.303 16:42:59 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.303 16:42:59 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.303 16:42:59 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.303 16:42:59 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.303 16:42:59 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:35.303 16:42:59 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.562 16:42:59 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:35.823 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:35.823 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:35.823 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:35.823 16:42:59 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:35.823 16:42:59 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:35.823 16:42:59 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:35.823 16:42:59 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:35.823 16:42:59 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:35.823 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_power_acpi.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_power_amd_pstate.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_power_cppc.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_power_intel_pstate.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_power_intel_uncore.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_power_kvm_vm.so.25 == liburing.so.* ]] 00:07:35.824 
16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.25 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:35.824 * spdk_dd linked to liburing 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:35.824 16:42:59 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:35.824 16:42:59 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@28 -- # 
CONFIG_HAVE_ARC4RANDOM=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@69 -- # 
CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:35.825 16:42:59 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:35.825 16:42:59 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:35.825 16:42:59 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:35.825 16:42:59 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:35.825 16:42:59 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:35.825 16:42:59 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:35.825 16:42:59 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:35.825 16:42:59 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:35.825 16:42:59 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.825 16:42:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:36.085 ************************************ 00:07:36.085 START TEST spdk_dd_basic_rw 00:07:36.085 ************************************ 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:36.085 * Looking for test storage... 
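The dd/common.sh trace above walks the spdk_dd binary's shared-library dependencies one entry at a time, matches any liburing.so.* name, cross-checks CONFIG_URING in the generated build_config.sh, and then exports liburing_in_use=1 so the guard in dd.sh falls through. A minimal sketch of that detection pattern, with placeholder paths; SPDK_DD_BIN and BUILD_CONFIG are illustrative names, and the real helper reads the dependency list in a slightly different format than plain ldd:

# Sketch only: decide whether spdk_dd was built against liburing.
# SPDK_DD_BIN and BUILD_CONFIG are placeholder paths, not the harness's real variables.
SPDK_DD_BIN=build/bin/spdk_dd
BUILD_CONFIG=test/common/build_config.sh

liburing_in_use=0
while read -r lib _; do                 # with ldd output, the first field is the soname
  [[ $lib == liburing.so.* ]] && liburing_in_use=1
done < <(ldd "$SPDK_DD_BIN")

if (( liburing_in_use )) && [[ -e $BUILD_CONFIG ]]; then
  source "$BUILD_CONFIG"                # defines CONFIG_URING and the other CONFIG_* values
  [[ $CONFIG_URING != y ]] && echo 'spdk_dd links liburing but CONFIG_URING=n' >&2
fi
export liburing_in_use

In the run above the binary does link liburing.so.2 and CONFIG_URING=y, so the (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) check in dd.sh does not trigger and basic_rw.sh is executed.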
00:07:36.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:36.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.085 --rc genhtml_branch_coverage=1 00:07:36.085 --rc genhtml_function_coverage=1 00:07:36.085 --rc genhtml_legend=1 00:07:36.085 --rc geninfo_all_blocks=1 00:07:36.085 --rc geninfo_unexecuted_blocks=1 00:07:36.085 00:07:36.085 ' 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:36.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.085 --rc genhtml_branch_coverage=1 00:07:36.085 --rc genhtml_function_coverage=1 00:07:36.085 --rc genhtml_legend=1 00:07:36.085 --rc geninfo_all_blocks=1 00:07:36.085 --rc geninfo_unexecuted_blocks=1 00:07:36.085 00:07:36.085 ' 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:36.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.085 --rc genhtml_branch_coverage=1 00:07:36.085 --rc genhtml_function_coverage=1 00:07:36.085 --rc genhtml_legend=1 00:07:36.085 --rc geninfo_all_blocks=1 00:07:36.085 --rc geninfo_unexecuted_blocks=1 00:07:36.085 00:07:36.085 ' 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:36.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.085 --rc genhtml_branch_coverage=1 00:07:36.085 --rc genhtml_function_coverage=1 00:07:36.085 --rc genhtml_legend=1 00:07:36.085 --rc geninfo_all_blocks=1 00:07:36.085 --rc geninfo_unexecuted_blocks=1 00:07:36.085 00:07:36.085 ' 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.085 16:42:59 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:36.085 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:36.346 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:36.346 16:42:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.347 ************************************ 00:07:36.347 START TEST dd_bs_lt_native_bs 00:07:36.347 ************************************ 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.347 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.348 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.348 { 00:07:36.348 "subsystems": [ 00:07:36.348 { 00:07:36.348 "subsystem": "bdev", 00:07:36.348 "config": [ 00:07:36.348 { 00:07:36.348 "params": { 00:07:36.348 "trtype": "pcie", 00:07:36.348 "traddr": "0000:00:10.0", 00:07:36.348 "name": "Nvme0" 00:07:36.348 }, 00:07:36.348 "method": "bdev_nvme_attach_controller" 00:07:36.348 }, 00:07:36.348 { 00:07:36.348 "method": "bdev_wait_for_examine" 00:07:36.348 } 00:07:36.348 ] 00:07:36.348 } 00:07:36.348 ] 00:07:36.348 } 00:07:36.348 [2024-11-29 16:43:00.074417] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:36.348 [2024-11-29 16:43:00.074527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73704 ] 00:07:36.607 [2024-11-29 16:43:00.200133] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
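The get_native_nvme_bs trace above needs no sysfs lookup: it captures the full spdk_nvme_identify report for the controller at 0000:00:10.0 and applies two regular expressions to it, first to find which LBA format is currently selected (#04 here) and then to read that format's data size (4096 bytes). A condensed sketch of the same two-step match; the function name and locals are illustrative, while the identify tool and the two patterns come straight from the trace:

# Sketch: derive the native block size of an NVMe namespace from spdk_nvme_identify output.
get_native_bs() {
  local pci=$1 id lbaf
  id=$(build/bin/spdk_nvme_identify -r "trtype:pcie traddr:${pci}") || return 1

  # e.g. "Current LBA Format: LBA Format #04"
  [[ $id =~ Current\ LBA\ Format:\ *LBA\ Format\ #([0-9]+) ]] || return 1
  lbaf=${BASH_REMATCH[1]}

  # e.g. "LBA Format #04: Data Size: 4096 Metadata Size: 0"
  [[ $id =~ LBA\ Format\ #${lbaf}:\ Data\ Size:\ *([0-9]+) ]] || return 1
  echo "${BASH_REMATCH[1]}"
}

native_bs=$(get_native_bs 0000:00:10.0)   # 4096 for the QEMU namespace identified above

basic_rw.sh then treats this value as the smallest block size it will exercise, which is why the first passes below run with --bs=4096.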
00:07:36.607 [2024-11-29 16:43:00.231788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.607 [2024-11-29 16:43:00.255452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.607 [2024-11-29 16:43:00.289069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.607 [2024-11-29 16:43:00.382380] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:36.607 [2024-11-29 16:43:00.382487] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.866 [2024-11-29 16:43:00.456802] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:36.867 00:07:36.867 real 0m0.494s 00:07:36.867 user 0m0.336s 00:07:36.867 sys 0m0.117s 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:36.867 ************************************ 00:07:36.867 END TEST dd_bs_lt_native_bs 00:07:36.867 ************************************ 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.867 ************************************ 00:07:36.867 START TEST dd_rw 00:07:36.867 ************************************ 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:36.867 16:43:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.436 16:43:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:37.436 16:43:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:37.436 16:43:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.436 16:43:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.436 [2024-11-29 16:43:01.210981] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:37.436 [2024-11-29 16:43:01.211119] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73735 ] 00:07:37.436 { 00:07:37.436 "subsystems": [ 00:07:37.436 { 00:07:37.436 "subsystem": "bdev", 00:07:37.436 "config": [ 00:07:37.436 { 00:07:37.436 "params": { 00:07:37.436 "trtype": "pcie", 00:07:37.436 "traddr": "0000:00:10.0", 00:07:37.436 "name": "Nvme0" 00:07:37.436 }, 00:07:37.436 "method": "bdev_nvme_attach_controller" 00:07:37.436 }, 00:07:37.436 { 00:07:37.436 "method": "bdev_wait_for_examine" 00:07:37.436 } 00:07:37.436 ] 00:07:37.436 } 00:07:37.436 ] 00:07:37.436 } 00:07:37.695 [2024-11-29 16:43:01.338087] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
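The dd_bs_lt_native_bs test that finished just above is a negative test: spdk_dd is launched through a NOT wrapper with --bs=2048 against the namespace whose native block size is 4096, spdk_dd refuses the copy ("--bs value cannot be less than ... native block size") and exits non-zero, and that failure is what makes the test pass. A rough sketch of the expect-failure pattern under assumed names; the harness's real NOT and valid_exec_arg helpers additionally resolve the command and translate exit codes (the es=234, es=106, es=1 lines above):

# Sketch of an expect-failure wrapper: succeed only when the wrapped command fails.
NOT() {
  if "$@"; then
    echo "expected failure, but '$*' succeeded" >&2
    return 1
  fi
  return 0
}

# A --bs below the output bdev's 4096-byte native block size must be rejected by spdk_dd.
# input.bin and nvme0.json are placeholders for the generated test file and bdev config.
NOT build/bin/spdk_dd --if=input.bin --ob=Nvme0n1 --bs=2048 --json nvme0.json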
00:07:37.695 [2024-11-29 16:43:01.365866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.695 [2024-11-29 16:43:01.385541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.695 [2024-11-29 16:43:01.414797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.954  [2024-11-29T16:43:01.746Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:37.954 00:07:37.954 16:43:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:37.954 16:43:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:37.954 16:43:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.954 16:43:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.954 { 00:07:37.954 "subsystems": [ 00:07:37.954 { 00:07:37.954 "subsystem": "bdev", 00:07:37.954 "config": [ 00:07:37.954 { 00:07:37.954 "params": { 00:07:37.954 "trtype": "pcie", 00:07:37.954 "traddr": "0000:00:10.0", 00:07:37.954 "name": "Nvme0" 00:07:37.954 }, 00:07:37.954 "method": "bdev_nvme_attach_controller" 00:07:37.954 }, 00:07:37.954 { 00:07:37.954 "method": "bdev_wait_for_examine" 00:07:37.954 } 00:07:37.954 ] 00:07:37.954 } 00:07:37.954 ] 00:07:37.954 } 00:07:37.954 [2024-11-29 16:43:01.684816] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:37.954 [2024-11-29 16:43:01.684943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73748 ] 00:07:38.214 [2024-11-29 16:43:01.812242] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
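The 60 kB transfers reported above follow directly from the parameters basic_rw derived earlier in the trace: the block-size list is built by left-shifting the native block size, each block size is exercised at queue depths 1 and 64, and the 4096-byte passes copy count=15 blocks. A quick check of those numbers, taken from the trace:

# Values from the trace: native_bs=4096, qds=(1 64), bss built by left-shifting native_bs.
native_bs=4096
qds=(1 64)                                   # each block size is tried at qd=1 and qd=64
bss=()
for bs in {0..2}; do bss+=( $(( native_bs << bs )) ); done   # bss = 4096 8192 16384

count=15
echo $(( count * native_bs ))                # 61440 bytes per pass at the native block size
echo "$(( count * native_bs / 1024 )) kB"    # 60 kB, matching "Copying: 60/60 [kB]" above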
00:07:38.214 [2024-11-29 16:43:01.836116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.214 [2024-11-29 16:43:01.854881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.214 [2024-11-29 16:43:01.883729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.214  [2024-11-29T16:43:02.266Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:38.474 00:07:38.474 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.474 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:38.474 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:38.474 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:38.474 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:38.474 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:38.474 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:38.474 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:38.474 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:38.474 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.474 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.474 { 00:07:38.474 "subsystems": [ 00:07:38.474 { 00:07:38.474 "subsystem": "bdev", 00:07:38.474 "config": [ 00:07:38.474 { 00:07:38.474 "params": { 00:07:38.474 "trtype": "pcie", 00:07:38.474 "traddr": "0000:00:10.0", 00:07:38.474 "name": "Nvme0" 00:07:38.474 }, 00:07:38.474 "method": "bdev_nvme_attach_controller" 00:07:38.474 }, 00:07:38.474 { 00:07:38.474 "method": "bdev_wait_for_examine" 00:07:38.474 } 00:07:38.474 ] 00:07:38.474 } 00:07:38.474 ] 00:07:38.474 } 00:07:38.474 [2024-11-29 16:43:02.154303] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:38.474 [2024-11-29 16:43:02.154431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73764 ] 00:07:38.733 [2024-11-29 16:43:02.280032] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
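Every spdk_dd call in this test receives its bdev configuration as JSON on a spare file descriptor (--json /dev/fd/62, produced by gen_conf): the small subsystem document echoed throughout the log, which attaches the controller at PCIe 0000:00:10.0 as "Nvme0" (so namespace 1 becomes bdev Nvme0n1) and then waits for bdev examination to finish. The same configuration written to an ordinary file, for reference; the file name is arbitrary:

# Sketch: the bdev config used by these spdk_dd runs, as a plain file instead of /dev/fd/62.
cat > nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

spdk_dd accepts this file via --json nvme0.json exactly as it accepts the /dev/fd/62 stream shown in the log.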
00:07:38.733 [2024-11-29 16:43:02.305506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.733 [2024-11-29 16:43:02.324904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.733 [2024-11-29 16:43:02.353353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.733  [2024-11-29T16:43:02.784Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:38.992 00:07:38.992 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:38.992 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:38.992 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:38.992 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:38.992 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:38.992 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:38.992 16:43:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.561 16:43:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:39.561 16:43:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:39.561 16:43:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.561 16:43:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.561 { 00:07:39.561 "subsystems": [ 00:07:39.561 { 00:07:39.561 "subsystem": "bdev", 00:07:39.561 "config": [ 00:07:39.561 { 00:07:39.561 "params": { 00:07:39.561 "trtype": "pcie", 00:07:39.561 "traddr": "0000:00:10.0", 00:07:39.561 "name": "Nvme0" 00:07:39.561 }, 00:07:39.561 "method": "bdev_nvme_attach_controller" 00:07:39.561 }, 00:07:39.561 { 00:07:39.561 "method": "bdev_wait_for_examine" 00:07:39.561 } 00:07:39.561 ] 00:07:39.561 } 00:07:39.561 ] 00:07:39.561 } 00:07:39.561 [2024-11-29 16:43:03.193770] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:39.561 [2024-11-29 16:43:03.193905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73783 ] 00:07:39.561 [2024-11-29 16:43:03.320781] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
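With the configuration in place, every (block size, queue depth) combination runs the same four-step cycle seen above: write the generated dump file to the bdev, read the same number of blocks back into a second file, byte-compare the two, then blank the first MiB of the namespace before the next pass. A compressed sketch of one pass with placeholder file names; the harness generates its pattern with gen_bytes and drives the cleanup through clear_nvme:

# Sketch of one basic_rw verification pass; dump0, dump1 and nvme0.json are placeholders.
bs=4096 qd=1 count=15
dd if=/dev/urandom of=dump0 bs="$bs" count="$count"   # stand-in for the gen_bytes pattern

build/bin/spdk_dd --if=dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json nvme0.json
build/bin/spdk_dd --ib=Nvme0n1 --of=dump1 --bs="$bs" --qd="$qd" --count="$count" --json nvme0.json

diff -q dump0 dump1                                   # any mismatch fails the test
build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json nvme0.json   # clear_nvme

The "Copying: 60/60 [kB]" and "Copying: 1024/1024 [kB]" progress lines in the log are the write-out, read-back and zero-fill steps of exactly this cycle.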
00:07:39.561 [2024-11-29 16:43:03.348558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.820 [2024-11-29 16:43:03.368787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.820 [2024-11-29 16:43:03.397846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.820  [2024-11-29T16:43:03.612Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:39.820 00:07:39.820 16:43:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:39.820 16:43:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:39.820 16:43:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.820 16:43:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.080 { 00:07:40.080 "subsystems": [ 00:07:40.080 { 00:07:40.080 "subsystem": "bdev", 00:07:40.080 "config": [ 00:07:40.080 { 00:07:40.080 "params": { 00:07:40.080 "trtype": "pcie", 00:07:40.080 "traddr": "0000:00:10.0", 00:07:40.080 "name": "Nvme0" 00:07:40.080 }, 00:07:40.080 "method": "bdev_nvme_attach_controller" 00:07:40.080 }, 00:07:40.080 { 00:07:40.080 "method": "bdev_wait_for_examine" 00:07:40.080 } 00:07:40.080 ] 00:07:40.080 } 00:07:40.080 ] 00:07:40.080 } 00:07:40.080 [2024-11-29 16:43:03.666633] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:40.080 [2024-11-29 16:43:03.666788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73791 ] 00:07:40.080 [2024-11-29 16:43:03.793007] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:40.080 [2024-11-29 16:43:03.821769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.080 [2024-11-29 16:43:03.844165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.340 [2024-11-29 16:43:03.874771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.340  [2024-11-29T16:43:04.132Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:40.340 00:07:40.340 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.340 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:40.340 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:40.340 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:40.340 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:40.340 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:40.340 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:40.340 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:40.340 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:40.340 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:40.340 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.599 { 00:07:40.599 "subsystems": [ 00:07:40.599 { 00:07:40.599 "subsystem": "bdev", 00:07:40.599 "config": [ 00:07:40.599 { 00:07:40.599 "params": { 00:07:40.599 "trtype": "pcie", 00:07:40.599 "traddr": "0000:00:10.0", 00:07:40.599 "name": "Nvme0" 00:07:40.599 }, 00:07:40.599 "method": "bdev_nvme_attach_controller" 00:07:40.599 }, 00:07:40.599 { 00:07:40.599 "method": "bdev_wait_for_examine" 00:07:40.599 } 00:07:40.599 ] 00:07:40.599 } 00:07:40.599 ] 00:07:40.599 } 00:07:40.599 [2024-11-29 16:43:04.150712] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:40.599 [2024-11-29 16:43:04.150846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73812 ] 00:07:40.599 [2024-11-29 16:43:04.277772] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:40.599 [2024-11-29 16:43:04.304338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.599 [2024-11-29 16:43:04.323086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.599 [2024-11-29 16:43:04.350240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.858  [2024-11-29T16:43:04.650Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:40.858 00:07:40.858 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:40.858 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:40.858 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:40.858 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:40.858 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:40.858 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:40.859 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:40.859 16:43:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.430 16:43:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:41.430 16:43:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:41.430 16:43:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.430 16:43:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.430 { 00:07:41.430 "subsystems": [ 00:07:41.430 { 00:07:41.430 "subsystem": "bdev", 00:07:41.430 "config": [ 00:07:41.430 { 00:07:41.430 "params": { 00:07:41.430 "trtype": "pcie", 00:07:41.430 "traddr": "0000:00:10.0", 00:07:41.430 "name": "Nvme0" 00:07:41.430 }, 00:07:41.430 "method": "bdev_nvme_attach_controller" 00:07:41.430 }, 00:07:41.430 { 00:07:41.430 "method": "bdev_wait_for_examine" 00:07:41.430 } 00:07:41.430 ] 00:07:41.430 } 00:07:41.430 ] 00:07:41.430 } 00:07:41.430 [2024-11-29 16:43:05.098526] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:41.430 [2024-11-29 16:43:05.098655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73825 ] 00:07:41.696 [2024-11-29 16:43:05.225739] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:41.696 [2024-11-29 16:43:05.253482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.696 [2024-11-29 16:43:05.273800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.696 [2024-11-29 16:43:05.302672] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.696  [2024-11-29T16:43:05.764Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:41.972 00:07:41.972 16:43:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:41.972 16:43:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:41.972 16:43:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.972 16:43:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.972 { 00:07:41.972 "subsystems": [ 00:07:41.972 { 00:07:41.972 "subsystem": "bdev", 00:07:41.972 "config": [ 00:07:41.972 { 00:07:41.972 "params": { 00:07:41.972 "trtype": "pcie", 00:07:41.972 "traddr": "0000:00:10.0", 00:07:41.972 "name": "Nvme0" 00:07:41.972 }, 00:07:41.972 "method": "bdev_nvme_attach_controller" 00:07:41.972 }, 00:07:41.972 { 00:07:41.972 "method": "bdev_wait_for_examine" 00:07:41.972 } 00:07:41.972 ] 00:07:41.972 } 00:07:41.972 ] 00:07:41.972 } 00:07:41.972 [2024-11-29 16:43:05.575552] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:41.972 [2024-11-29 16:43:05.575693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73839 ] 00:07:41.972 [2024-11-29 16:43:05.702260] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
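At this point the outer loop has moved on to the 8192-byte block size (native_bs shifted left once) and the block count has dropped to 7, so each transfer shrinks accordingly while the verification cycle itself is unchanged. The arithmetic behind the copy summaries here:

echo $(( 7 * 8192 ))                 # 57344 bytes, the size reported for these passes
echo "$(( 7 * 8192 / 1024 )) kB"     # 56 kB, matching "Copying: 56/56 [kB]"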
00:07:41.972 [2024-11-29 16:43:05.731103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.241 [2024-11-29 16:43:05.754856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.241 [2024-11-29 16:43:05.786249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.241  [2024-11-29T16:43:06.033Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:42.241 00:07:42.241 16:43:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.241 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:42.241 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:42.241 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:42.241 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:42.241 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:42.241 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:42.241 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:42.241 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:42.241 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:42.241 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.501 { 00:07:42.501 "subsystems": [ 00:07:42.501 { 00:07:42.501 "subsystem": "bdev", 00:07:42.501 "config": [ 00:07:42.501 { 00:07:42.501 "params": { 00:07:42.501 "trtype": "pcie", 00:07:42.501 "traddr": "0000:00:10.0", 00:07:42.501 "name": "Nvme0" 00:07:42.501 }, 00:07:42.501 "method": "bdev_nvme_attach_controller" 00:07:42.501 }, 00:07:42.501 { 00:07:42.501 "method": "bdev_wait_for_examine" 00:07:42.501 } 00:07:42.501 ] 00:07:42.501 } 00:07:42.501 ] 00:07:42.501 } 00:07:42.501 [2024-11-29 16:43:06.067620] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:42.501 [2024-11-29 16:43:06.067721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73854 ] 00:07:42.501 [2024-11-29 16:43:06.194686] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
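The clear_nvme step visible above resets the bdev between passes. A sketch, reusing SPDK_DD and CONF from the previous snippet (again an illustrative condensation of dd/common.sh, not its literal code):

# zero-fill the first 1 MiB of the bdev (bs=1048576, count=1) so the next (bs, qd) pass starts clean
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$CONF")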
00:07:42.501 [2024-11-29 16:43:06.220781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.501 [2024-11-29 16:43:06.240277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.501 [2024-11-29 16:43:06.269302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.760  [2024-11-29T16:43:06.552Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:42.760 00:07:42.760 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:42.760 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:42.760 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:42.760 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:42.760 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:42.760 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:42.760 16:43:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.324 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:43.324 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:43.324 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.324 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.324 { 00:07:43.324 "subsystems": [ 00:07:43.324 { 00:07:43.324 "subsystem": "bdev", 00:07:43.324 "config": [ 00:07:43.324 { 00:07:43.324 "params": { 00:07:43.324 "trtype": "pcie", 00:07:43.324 "traddr": "0000:00:10.0", 00:07:43.324 "name": "Nvme0" 00:07:43.324 }, 00:07:43.324 "method": "bdev_nvme_attach_controller" 00:07:43.324 }, 00:07:43.324 { 00:07:43.324 "method": "bdev_wait_for_examine" 00:07:43.324 } 00:07:43.324 ] 00:07:43.324 } 00:07:43.324 ] 00:07:43.324 } 00:07:43.324 [2024-11-29 16:43:07.071187] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:43.324 [2024-11-29 16:43:07.071295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73873 ] 00:07:43.581 [2024-11-29 16:43:07.196842] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:43.581 [2024-11-29 16:43:07.223735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.581 [2024-11-29 16:43:07.242291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.581 [2024-11-29 16:43:07.269539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.581  [2024-11-29T16:43:07.631Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:43.839 00:07:43.839 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:43.839 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:43.839 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.839 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.839 { 00:07:43.839 "subsystems": [ 00:07:43.839 { 00:07:43.839 "subsystem": "bdev", 00:07:43.839 "config": [ 00:07:43.839 { 00:07:43.839 "params": { 00:07:43.839 "trtype": "pcie", 00:07:43.839 "traddr": "0000:00:10.0", 00:07:43.839 "name": "Nvme0" 00:07:43.839 }, 00:07:43.839 "method": "bdev_nvme_attach_controller" 00:07:43.839 }, 00:07:43.839 { 00:07:43.839 "method": "bdev_wait_for_examine" 00:07:43.839 } 00:07:43.839 ] 00:07:43.839 } 00:07:43.839 ] 00:07:43.839 } 00:07:43.839 [2024-11-29 16:43:07.538158] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:43.839 [2024-11-29 16:43:07.538297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73887 ] 00:07:44.098 [2024-11-29 16:43:07.664586] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:44.098 [2024-11-29 16:43:07.691519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.098 [2024-11-29 16:43:07.711028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.098 [2024-11-29 16:43:07.740013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.098  [2024-11-29T16:43:08.147Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:44.355 00:07:44.355 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.355 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:44.355 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:44.355 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:44.355 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:44.355 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:44.355 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:44.355 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:44.355 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:44.355 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:44.355 16:43:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.355 { 00:07:44.355 "subsystems": [ 00:07:44.355 { 00:07:44.355 "subsystem": "bdev", 00:07:44.355 "config": [ 00:07:44.355 { 00:07:44.355 "params": { 00:07:44.355 "trtype": "pcie", 00:07:44.355 "traddr": "0000:00:10.0", 00:07:44.355 "name": "Nvme0" 00:07:44.355 }, 00:07:44.355 "method": "bdev_nvme_attach_controller" 00:07:44.355 }, 00:07:44.355 { 00:07:44.355 "method": "bdev_wait_for_examine" 00:07:44.355 } 00:07:44.355 ] 00:07:44.355 } 00:07:44.355 ] 00:07:44.355 } 00:07:44.355 [2024-11-29 16:43:08.025719] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:44.356 [2024-11-29 16:43:08.025914] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73902 ] 00:07:44.614 [2024-11-29 16:43:08.165078] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:44.614 [2024-11-29 16:43:08.191538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.614 [2024-11-29 16:43:08.212279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.614 [2024-11-29 16:43:08.241896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.614  [2024-11-29T16:43:08.664Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:44.872 00:07:44.872 16:43:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:44.872 16:43:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:44.873 16:43:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:44.873 16:43:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:44.873 16:43:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:44.873 16:43:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:44.873 16:43:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:44.873 16:43:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.439 16:43:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:45.439 16:43:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:45.439 16:43:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:45.439 16:43:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.439 [2024-11-29 16:43:08.980554] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:45.439 [2024-11-29 16:43:08.980673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73921 ] 00:07:45.439 { 00:07:45.439 "subsystems": [ 00:07:45.439 { 00:07:45.439 "subsystem": "bdev", 00:07:45.439 "config": [ 00:07:45.439 { 00:07:45.439 "params": { 00:07:45.439 "trtype": "pcie", 00:07:45.439 "traddr": "0000:00:10.0", 00:07:45.439 "name": "Nvme0" 00:07:45.439 }, 00:07:45.439 "method": "bdev_nvme_attach_controller" 00:07:45.439 }, 00:07:45.439 { 00:07:45.439 "method": "bdev_wait_for_examine" 00:07:45.439 } 00:07:45.439 ] 00:07:45.439 } 00:07:45.439 ] 00:07:45.439 } 00:07:45.439 [2024-11-29 16:43:09.106198] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
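The xtrace above shows the test sweeping block size and queue depth. A hedged sketch of that sweep (the real bss/qds arrays are defined in basic_rw.sh and may hold other values; the ones below are simply what this run exercised):

bss=(8192 16384)
qds=(1 64)
for bs in "${bss[@]}"; do
  for qd in "${qds[@]}"; do
    # observed in this run: 7*8192=57344 and 3*16384=49152 bytes per pass
    count=$(( bs == 8192 ? 7 : 3 ))
    size=$(( count * bs ))
    echo "pass: bs=$bs qd=$qd count=$count size=$size"
  done
done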
00:07:45.439 [2024-11-29 16:43:09.133763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.439 [2024-11-29 16:43:09.154352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.439 [2024-11-29 16:43:09.186756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.698  [2024-11-29T16:43:09.490Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:45.698 00:07:45.698 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:45.698 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:45.698 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:45.698 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.698 { 00:07:45.698 "subsystems": [ 00:07:45.698 { 00:07:45.698 "subsystem": "bdev", 00:07:45.698 "config": [ 00:07:45.698 { 00:07:45.698 "params": { 00:07:45.698 "trtype": "pcie", 00:07:45.698 "traddr": "0000:00:10.0", 00:07:45.698 "name": "Nvme0" 00:07:45.698 }, 00:07:45.698 "method": "bdev_nvme_attach_controller" 00:07:45.698 }, 00:07:45.698 { 00:07:45.698 "method": "bdev_wait_for_examine" 00:07:45.698 } 00:07:45.698 ] 00:07:45.698 } 00:07:45.698 ] 00:07:45.698 } 00:07:45.698 [2024-11-29 16:43:09.449748] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:45.698 [2024-11-29 16:43:09.449868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73935 ] 00:07:45.958 [2024-11-29 16:43:09.575426] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:45.958 [2024-11-29 16:43:09.600176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.958 [2024-11-29 16:43:09.619100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.958 [2024-11-29 16:43:09.646279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.958  [2024-11-29T16:43:10.010Z] Copying: 48/48 [kB] (average 23 MBps) 00:07:46.218 00:07:46.218 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.218 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:46.218 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:46.218 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:46.218 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:46.218 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:46.218 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:46.218 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:46.218 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:46.218 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.218 16:43:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.218 { 00:07:46.218 "subsystems": [ 00:07:46.218 { 00:07:46.218 "subsystem": "bdev", 00:07:46.218 "config": [ 00:07:46.218 { 00:07:46.218 "params": { 00:07:46.218 "trtype": "pcie", 00:07:46.218 "traddr": "0000:00:10.0", 00:07:46.218 "name": "Nvme0" 00:07:46.218 }, 00:07:46.218 "method": "bdev_nvme_attach_controller" 00:07:46.218 }, 00:07:46.218 { 00:07:46.218 "method": "bdev_wait_for_examine" 00:07:46.218 } 00:07:46.218 ] 00:07:46.218 } 00:07:46.218 ] 00:07:46.218 } 00:07:46.218 [2024-11-29 16:43:09.921841] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:46.218 [2024-11-29 16:43:09.921949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73945 ] 00:07:46.477 [2024-11-29 16:43:10.049088] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:46.477 [2024-11-29 16:43:10.075852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.477 [2024-11-29 16:43:10.095602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.477 [2024-11-29 16:43:10.127224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.477  [2024-11-29T16:43:10.528Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:46.736 00:07:46.736 16:43:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:46.736 16:43:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:46.736 16:43:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:46.736 16:43:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:46.736 16:43:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:46.736 16:43:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:46.736 16:43:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.996 16:43:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:46.996 16:43:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:46.996 16:43:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.996 16:43:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.255 { 00:07:47.255 "subsystems": [ 00:07:47.255 { 00:07:47.255 "subsystem": "bdev", 00:07:47.255 "config": [ 00:07:47.255 { 00:07:47.256 "params": { 00:07:47.256 "trtype": "pcie", 00:07:47.256 "traddr": "0000:00:10.0", 00:07:47.256 "name": "Nvme0" 00:07:47.256 }, 00:07:47.256 "method": "bdev_nvme_attach_controller" 00:07:47.256 }, 00:07:47.256 { 00:07:47.256 "method": "bdev_wait_for_examine" 00:07:47.256 } 00:07:47.256 ] 00:07:47.256 } 00:07:47.256 ] 00:07:47.256 } 00:07:47.256 [2024-11-29 16:43:10.844529] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:47.256 [2024-11-29 16:43:10.844653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73964 ] 00:07:47.256 [2024-11-29 16:43:10.971802] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:47.256 [2024-11-29 16:43:10.996430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.256 [2024-11-29 16:43:11.016546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.256 [2024-11-29 16:43:11.045524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.515  [2024-11-29T16:43:11.307Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:47.515 00:07:47.515 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:47.515 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:47.515 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:47.515 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.774 [2024-11-29 16:43:11.313601] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:47.774 [2024-11-29 16:43:11.313739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73983 ] 00:07:47.774 { 00:07:47.774 "subsystems": [ 00:07:47.774 { 00:07:47.774 "subsystem": "bdev", 00:07:47.774 "config": [ 00:07:47.774 { 00:07:47.774 "params": { 00:07:47.774 "trtype": "pcie", 00:07:47.774 "traddr": "0000:00:10.0", 00:07:47.774 "name": "Nvme0" 00:07:47.774 }, 00:07:47.774 "method": "bdev_nvme_attach_controller" 00:07:47.774 }, 00:07:47.774 { 00:07:47.774 "method": "bdev_wait_for_examine" 00:07:47.774 } 00:07:47.774 ] 00:07:47.774 } 00:07:47.774 ] 00:07:47.774 } 00:07:47.774 [2024-11-29 16:43:11.433275] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:47.774 [2024-11-29 16:43:11.458511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.774 [2024-11-29 16:43:11.478002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.774 [2024-11-29 16:43:11.506906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.033  [2024-11-29T16:43:11.825Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:48.033 00:07:48.033 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.033 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:48.033 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:48.033 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:48.033 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:48.033 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:48.033 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:48.033 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:48.033 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:48.033 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:48.033 16:43:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.033 [2024-11-29 16:43:11.772214] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:48.033 [2024-11-29 16:43:11.772370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73993 ] 00:07:48.033 { 00:07:48.033 "subsystems": [ 00:07:48.033 { 00:07:48.033 "subsystem": "bdev", 00:07:48.033 "config": [ 00:07:48.033 { 00:07:48.033 "params": { 00:07:48.033 "trtype": "pcie", 00:07:48.033 "traddr": "0000:00:10.0", 00:07:48.033 "name": "Nvme0" 00:07:48.033 }, 00:07:48.033 "method": "bdev_nvme_attach_controller" 00:07:48.033 }, 00:07:48.033 { 00:07:48.033 "method": "bdev_wait_for_examine" 00:07:48.033 } 00:07:48.033 ] 00:07:48.033 } 00:07:48.033 ] 00:07:48.033 } 00:07:48.293 [2024-11-29 16:43:11.898900] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:48.293 [2024-11-29 16:43:11.922843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.293 [2024-11-29 16:43:11.941341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.293 [2024-11-29 16:43:11.968444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.293  [2024-11-29T16:43:12.344Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:48.552 00:07:48.552 00:07:48.552 real 0m11.618s 00:07:48.552 user 0m8.573s 00:07:48.552 sys 0m3.763s 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.552 ************************************ 00:07:48.552 END TEST dd_rw 00:07:48.552 ************************************ 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.552 ************************************ 00:07:48.552 START TEST dd_rw_offset 00:07:48.552 ************************************ 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=baa0p6fvc0zmwutwtj8yeed1gzz2hh3a7ym7qekqrbh9o9jnu9qe5oxuy2rg7d84l4u9dynbqqhbwfu2m01innunixboqxt35bn4stg38luhvdns86ttucw3hqmf9p1pb8xrf6pyk3p0slpdqcm1sza2a2mau1t4sim6uulji1uckksphp3x3t5jw0348tacw3jabojw65ohqii5it6mmb3fwebp8dot49erytbr30ii65nmgzhwf0jr59arysichr2skx62nsm9ih6ys5blnnz78z4sgcsn4xx26p11begimbx6mvvfflwhvm5am2ivp8xr9r7mgbnqnsrb8osyet5rsws10gfg87qenccpf0up9gl2cme2xcyyd61gzyeyein8ne0bh3il9fsn5yhxy3s0h78wb4fns5i1uk2boxjvaps464jqe28436xuurljmgbkkmgoknuauak1knpit2ec1bnkpsrofeek19541c4i9377jn4zt1c7a6kdxpd2dim1awpljsvv4cuta60s21m6zylniykisyenygfhmec7xbuahz33brvlhtdiyiunlgrgfor2v89jphupv3hzbc34zvhzg4pqa8mc2gy4ff3ajfrxb3x5cwuito1kx60qc2rgsajg00xkxqmzozp14z3ihem1kmbqynowjp0cb3nd8domx3jc8rdo3xtkphye281jzpwxbrpigy9pm0aif3f2zze8iliuiytl87bfrqxoml4v83jto3wtzixsgkd3zn3095xbeckknvn84u2d8dgvbg24snncystk0zhvdl6aiwqvhqtox270ym7cdtvrr18w6hbknw0pi7q1d077gspxbzp2ujjpg3uwaxj0a2caf88plznt4ei0pabpgcweenz0cojev1anek5ael95nyw3owxru7rqgius6piv2ifaami7s09zuma8qila0048nsy1vk4jq2kuvpsqqkuv0d87c2bgig156ck3gudc3j9r5fquvpdun42i1vb43lr8g6amw4rw143egpuvrhewoo7nfrtqokmhynd81wkszefptvc46d76zv24tmqt1ahqmxx408maw9r3qljg0p6jf02pyex8g367jde4gcqf7g68cmlrzyu7il44w7wnf3o8vud8u065vws3xrmwhql6ke131q79e0x67xbk77h8b79ydhx79kajecbvchwv8e5pic5ojvtxjhqo9gbrfo91pfjna0natad239uoj5cgsmbpqok8na9ga9iylychevgomjwbk84wo8hwwibblterlkmwkdsb5on8ecv75gt8jiri8wme8jf3584m3o5xl8j5x1v7uweuk0dadh8ddywokpk43utorhpqt64vs3wxl0bhmvlkbbl4xmxnjljk424fewfoy78eu7fgrfp1yz7tzy8qa4b22lg3t68ybr2u69f9oim5yle49jql7sa7vd21jo6toxfp5fjifg8wih3bc0oesqu5eq4p52sycsalvc1hc8bhmj1ndxvd5mx4iyh8noxp63hrypfd1r75gije0ka7g82vklyu5pkogiv3yk35u8rq5eq90nxxnb58orf9qjygfgonxu1ghof5bcrbky853y6yn2ts3s014pinihr18blpssfmcxwduzfc50dukkmaeozlgzsedpvv4tvqkcvzridpypjzjb1c7yek6zcsi46phcp27fhpqtm0pchp1n6uqby2daa88bct4wclcrqa0azw5gwqjjkob5dbg7626q6aw8squ8zexgcuqqy36v5ddhw6xn45pbrmt827n7s4hl4zpq3t6adjnl6r9rax6qp4nmmib36r9f416r1poa97yyqltfr8ucnxxsrmgt5105hq9jrxpqv5nfchax4mj77jjys76q5ore42t8krjfhx2koqrwl0dmepzqu5i6s8nia67m18l2jzh5c018l22pseh3l92qdybys8qswfnjxg1bbf4f4ao1ekf2zqo3060a3mu1nbxru3s84yu9wsjooqdy2u3zzte35u0pdgfz9rasuzmamwfuixd743gdufwlodq9jjjezb8am63kkb74k5ay02cksqi00jh8fdt5db6lcxx2btehu2yd0ximolt03rk5x0w16o9iace2un4aaamc3a50r09gf3z92yxv07ewzso0g9oz3upgau7nfgfhdnu9y0d7lrnnoaxw941vccz1xk6cv7n2el5urgj76t7wazm66m6fz5t5008b3fq748lqw35tr633d85i2k8rbkfh0vhulifjhuu2cmf44igre92unjscschvj31ydt2oq344bqa5ttyjxovlfxqf7lk9crdvkjwet46p4f1qyrfcll1xceccevp1q1n7dt9ujygzo5b2r0pcavwm7yd01zoj383nnoqa59gkxi5ein6b3ms156mau0nibg8ln8tcnyc3wxlaonawe6visaglc3vixhgvfblhpjbvnhsgy8zln2ubsfsi1okfi8gkmonjarhigf4xwgrbdafttvqmuj2v4zcl2kabfzd3mas1cwsrub7z7vlt152c6q57dkpervhvcxf72irnbrk7pqcha755edluheiicvs1svg4i5dwf5ai4pjm8d44ef50nkr03rvyzm3uhe7kr7lh8rif1gmxui9sbgbrjbl473rc5e90xxao3d35d584dre6kjk6v3buco1zfs0uscshe87p77vygh9qwg3xqjt02hs6kaptj45941m0pnyjfhp79q5cuhfahjik87tvke1dqcnhg712glse3brey9yqh630xud0vqxxd0jutvymn7fsv3edlnelct3hvfdxtex1vu7h33p5hfwsyb3crlgtuyphugv6uo4b86974nllwo5s103srkopbohcvhg7c9ucz5eji4yn5n9f5s41o8dcexwinf7r93jmg5feinfqzjuy8jrhc627fdfn3c3lvia1nniqxhr3awbctrvfoqigja4ofz1b4lgywar30j7t5nlzu6xsm5hqqvopr0qiuxzcrylpz3f01vmkoh8ovktm6saelcx3vi5dbw1wslf3k2me8m4q92w95zoj9rvcgh1i9ywdft3edduvzzhhvzh34u6ys8k17jyrkvj3zjpxpbnrz7wx9izr1r1mlxfmn4fdswlqdwhdgrd97nwiihhuwmr9nqmxb9q2kx5if8sgdm0mz5cnfd9z0suoklra8rb2hzc1v37m1g0gjmw798ndirqabc9qv5olrqj3splhyjskccjct1nb12jtpx7waloqrd5dik73nwwy0fzv67lertzfh2pn7wp8rp6ne2aplw76z7ks9tbh8wqx7xyctbttafn3xoge9cmnzek8knyzdboc3fwhsj7nx2elew8lpvw821nox2ltwn4nq0esl3jfj70g2niruns2v1ax4t81co7guke9hwt5ypxufoxlh0g7p8dqsnqsryg4rk2byccuw6us9ntpudwtwcl7j2ocb7rbuxig30pt6ux8c3grx786pzs27sqf
hno2tykelwvh7tjp1l6ljxxguz30jt76d5ytszbgokb40n2e4xpy5rj8lx6ncimdqo2wsr6rq8yqdutvvyr5y4ktrbrnsxeqj9ts9l48bufleiatyq4zycfnqfq2wbhnctygafpqaki60qedubg0ucsrqxgcddnbdbbpd3jfp9rruypv4ufkg5ptqz1v7avkotbvyj4yjl0pu2ywjkf4rbeog344amn4bwcv1srrrq0xncl6vyou0xwfls6yi45xbfm7jsk9mbyrhl8krrdaxz4ws2h7ejs7n9ne50cav3cf464q1b65pb2zxwibb7mpblek3l5oifif319dsr0ze9y0j48jn0fy9b5vmn5av9t0o95dhixegkpwwozqo6cswpdk18ry05ej264bxi4blymr9cquv9qejkupoovxrv0g7wa0k0hzkjb8ev7egy4qc9byb5n5kiw7vie5v0hsnfhnp80qxnffoczblh4bjuqlg1ts6tmc5smydo98xb6705jkh11yidpjzx17hw1mk3g3u1k2cyrqky 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:48.552 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:48.553 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:48.811 { 00:07:48.811 "subsystems": [ 00:07:48.811 { 00:07:48.811 "subsystem": "bdev", 00:07:48.811 "config": [ 00:07:48.811 { 00:07:48.811 "params": { 00:07:48.811 "trtype": "pcie", 00:07:48.811 "traddr": "0000:00:10.0", 00:07:48.811 "name": "Nvme0" 00:07:48.811 }, 00:07:48.811 "method": "bdev_nvme_attach_controller" 00:07:48.811 }, 00:07:48.811 { 00:07:48.811 "method": "bdev_wait_for_examine" 00:07:48.811 } 00:07:48.811 ] 00:07:48.811 } 00:07:48.811 ] 00:07:48.811 } 00:07:48.811 [2024-11-29 16:43:12.348516] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:48.811 [2024-11-29 16:43:12.348633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74029 ] 00:07:48.811 [2024-11-29 16:43:12.475497] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:48.811 [2024-11-29 16:43:12.501689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.811 [2024-11-29 16:43:12.521612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.811 [2024-11-29 16:43:12.551767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.071  [2024-11-29T16:43:12.863Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:49.071 00:07:49.071 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:49.071 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:49.071 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:49.071 16:43:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:49.071 [2024-11-29 16:43:12.815753] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:07:49.071 [2024-11-29 16:43:12.815938] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74037 ] 00:07:49.071 { 00:07:49.071 "subsystems": [ 00:07:49.071 { 00:07:49.071 "subsystem": "bdev", 00:07:49.071 "config": [ 00:07:49.071 { 00:07:49.071 "params": { 00:07:49.071 "trtype": "pcie", 00:07:49.071 "traddr": "0000:00:10.0", 00:07:49.071 "name": "Nvme0" 00:07:49.071 }, 00:07:49.071 "method": "bdev_nvme_attach_controller" 00:07:49.071 }, 00:07:49.071 { 00:07:49.071 "method": "bdev_wait_for_examine" 00:07:49.071 } 00:07:49.071 ] 00:07:49.071 } 00:07:49.071 ] 00:07:49.071 } 00:07:49.330 [2024-11-29 16:43:12.937656] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:49.330 [2024-11-29 16:43:12.962629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.330 [2024-11-29 16:43:12.981495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.330 [2024-11-29 16:43:13.009810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.330  [2024-11-29T16:43:13.381Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:49.589 00:07:49.589 16:43:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:49.590 16:43:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ baa0p6fvc0zmwutwtj8yeed1gzz2hh3a7ym7qekqrbh9o9jnu9qe5oxuy2rg7d84l4u9dynbqqhbwfu2m01innunixboqxt35bn4stg38luhvdns86ttucw3hqmf9p1pb8xrf6pyk3p0slpdqcm1sza2a2mau1t4sim6uulji1uckksphp3x3t5jw0348tacw3jabojw65ohqii5it6mmb3fwebp8dot49erytbr30ii65nmgzhwf0jr59arysichr2skx62nsm9ih6ys5blnnz78z4sgcsn4xx26p11begimbx6mvvfflwhvm5am2ivp8xr9r7mgbnqnsrb8osyet5rsws10gfg87qenccpf0up9gl2cme2xcyyd61gzyeyein8ne0bh3il9fsn5yhxy3s0h78wb4fns5i1uk2boxjvaps464jqe28436xuurljmgbkkmgoknuauak1knpit2ec1bnkpsrofeek19541c4i9377jn4zt1c7a6kdxpd2dim1awpljsvv4cuta60s21m6zylniykisyenygfhmec7xbuahz33brvlhtdiyiunlgrgfor2v89jphupv3hzbc34zvhzg4pqa8mc2gy4ff3ajfrxb3x5cwuito1kx60qc2rgsajg00xkxqmzozp14z3ihem1kmbqynowjp0cb3nd8domx3jc8rdo3xtkphye281jzpwxbrpigy9pm0aif3f2zze8iliuiytl87bfrqxoml4v83jto3wtzixsgkd3zn3095xbeckknvn84u2d8dgvbg24snncystk0zhvdl6aiwqvhqtox270ym7cdtvrr18w6hbknw0pi7q1d077gspxbzp2ujjpg3uwaxj0a2caf88plznt4ei0pabpgcweenz0cojev1anek5ael95nyw3owxru7rqgius6piv2ifaami7s09zuma8qila0048nsy1vk4jq2kuvpsqqkuv0d87c2bgig156ck3gudc3j9r5fquvpdun42i1vb43lr8g6amw4rw143egpuvrhewoo7nfrtqokmhynd81wkszefptvc46d76zv24tmqt1ahqmxx408maw9r3qljg0p6jf02pyex8g367jde4gcqf7g68cmlrzyu7il44w7wnf3o8vud8u065vws3xrmwhql6ke131q79e0x67xbk77h8b79ydhx79kajecbvchwv8e5pic5ojvtxjhqo9gbrfo91pfjna0natad239uoj5cgsmbpqok8na9ga9iylychevgomjwbk84wo8hwwibblterlkmwkdsb5on8ecv75gt8jiri8wme8jf3584m3o5xl8j5x1v7uweuk0dadh8ddywokpk43utorhpqt64vs3wxl0bhmvlkbbl4xmxnjljk424fewfoy78eu7fgrfp1yz7tzy8qa4b22lg3t68ybr2u69f9oim5yle49jql7sa7vd21jo6toxfp5fjifg8wih3bc0oesqu5eq4p52sycsalvc1hc8bhmj1ndxvd5mx4iyh8noxp63hrypfd1r75gije0ka7g82vklyu5pkogiv3yk35u8rq5eq90nxxnb58orf9qjygfgonxu1ghof5bcrbky853y6yn2ts3s014pinihr18blpssfmcxwduzfc50dukkmaeozlgzsedpvv4tvqkcvzridpypjzjb1c7yek6zcsi46phcp27fhpqtm0pchp1n6uqby2daa88bct4wclcrqa0azw5gwqjjkob5dbg7626q6aw8squ8zexgcuqqy36v5ddhw6xn45pbrmt827n7s4hl4zpq3t6adjnl6r9rax6qp4nmmib36r9f416r1poa97yyqltfr8ucnxxsrmgt5105hq9jrxpqv5nfchax4mj77jjys76q5ore42t8krjfhx2koqrwl0dmepzqu5i
6s8nia67m18l2jzh5c018l22pseh3l92qdybys8qswfnjxg1bbf4f4ao1ekf2zqo3060a3mu1nbxru3s84yu9wsjooqdy2u3zzte35u0pdgfz9rasuzmamwfuixd743gdufwlodq9jjjezb8am63kkb74k5ay02cksqi00jh8fdt5db6lcxx2btehu2yd0ximolt03rk5x0w16o9iace2un4aaamc3a50r09gf3z92yxv07ewzso0g9oz3upgau7nfgfhdnu9y0d7lrnnoaxw941vccz1xk6cv7n2el5urgj76t7wazm66m6fz5t5008b3fq748lqw35tr633d85i2k8rbkfh0vhulifjhuu2cmf44igre92unjscschvj31ydt2oq344bqa5ttyjxovlfxqf7lk9crdvkjwet46p4f1qyrfcll1xceccevp1q1n7dt9ujygzo5b2r0pcavwm7yd01zoj383nnoqa59gkxi5ein6b3ms156mau0nibg8ln8tcnyc3wxlaonawe6visaglc3vixhgvfblhpjbvnhsgy8zln2ubsfsi1okfi8gkmonjarhigf4xwgrbdafttvqmuj2v4zcl2kabfzd3mas1cwsrub7z7vlt152c6q57dkpervhvcxf72irnbrk7pqcha755edluheiicvs1svg4i5dwf5ai4pjm8d44ef50nkr03rvyzm3uhe7kr7lh8rif1gmxui9sbgbrjbl473rc5e90xxao3d35d584dre6kjk6v3buco1zfs0uscshe87p77vygh9qwg3xqjt02hs6kaptj45941m0pnyjfhp79q5cuhfahjik87tvke1dqcnhg712glse3brey9yqh630xud0vqxxd0jutvymn7fsv3edlnelct3hvfdxtex1vu7h33p5hfwsyb3crlgtuyphugv6uo4b86974nllwo5s103srkopbohcvhg7c9ucz5eji4yn5n9f5s41o8dcexwinf7r93jmg5feinfqzjuy8jrhc627fdfn3c3lvia1nniqxhr3awbctrvfoqigja4ofz1b4lgywar30j7t5nlzu6xsm5hqqvopr0qiuxzcrylpz3f01vmkoh8ovktm6saelcx3vi5dbw1wslf3k2me8m4q92w95zoj9rvcgh1i9ywdft3edduvzzhhvzh34u6ys8k17jyrkvj3zjpxpbnrz7wx9izr1r1mlxfmn4fdswlqdwhdgrd97nwiihhuwmr9nqmxb9q2kx5if8sgdm0mz5cnfd9z0suoklra8rb2hzc1v37m1g0gjmw798ndirqabc9qv5olrqj3splhyjskccjct1nb12jtpx7waloqrd5dik73nwwy0fzv67lertzfh2pn7wp8rp6ne2aplw76z7ks9tbh8wqx7xyctbttafn3xoge9cmnzek8knyzdboc3fwhsj7nx2elew8lpvw821nox2ltwn4nq0esl3jfj70g2niruns2v1ax4t81co7guke9hwt5ypxufoxlh0g7p8dqsnqsryg4rk2byccuw6us9ntpudwtwcl7j2ocb7rbuxig30pt6ux8c3grx786pzs27sqfhno2tykelwvh7tjp1l6ljxxguz30jt76d5ytszbgokb40n2e4xpy5rj8lx6ncimdqo2wsr6rq8yqdutvvyr5y4ktrbrnsxeqj9ts9l48bufleiatyq4zycfnqfq2wbhnctygafpqaki60qedubg0ucsrqxgcddnbdbbpd3jfp9rruypv4ufkg5ptqz1v7avkotbvyj4yjl0pu2ywjkf4rbeog344amn4bwcv1srrrq0xncl6vyou0xwfls6yi45xbfm7jsk9mbyrhl8krrdaxz4ws2h7ejs7n9ne50cav3cf464q1b65pb2zxwibb7mpblek3l5oifif319dsr0ze9y0j48jn0fy9b5vmn5av9t0o95dhixegkpwwozqo6cswpdk18ry05ej264bxi4blymr9cquv9qejkupoovxrv0g7wa0k0hzkjb8ev7egy4qc9byb5n5kiw7vie5v0hsnfhnp80qxnffoczblh4bjuqlg1ts6tmc5smydo98xb6705jkh11yidpjzx17hw1mk3g3u1k2cyrqky == 
\b\a\a\0\p\6\f\v\c\0\z\m\w\u\t\w\t\j\8\y\e\e\d\1\g\z\z\2\h\h\3\a\7\y\m\7\q\e\k\q\r\b\h\9\o\9\j\n\u\9\q\e\5\o\x\u\y\2\r\g\7\d\8\4\l\4\u\9\d\y\n\b\q\q\h\b\w\f\u\2\m\0\1\i\n\n\u\n\i\x\b\o\q\x\t\3\5\b\n\4\s\t\g\3\8\l\u\h\v\d\n\s\8\6\t\t\u\c\w\3\h\q\m\f\9\p\1\p\b\8\x\r\f\6\p\y\k\3\p\0\s\l\p\d\q\c\m\1\s\z\a\2\a\2\m\a\u\1\t\4\s\i\m\6\u\u\l\j\i\1\u\c\k\k\s\p\h\p\3\x\3\t\5\j\w\0\3\4\8\t\a\c\w\3\j\a\b\o\j\w\6\5\o\h\q\i\i\5\i\t\6\m\m\b\3\f\w\e\b\p\8\d\o\t\4\9\e\r\y\t\b\r\3\0\i\i\6\5\n\m\g\z\h\w\f\0\j\r\5\9\a\r\y\s\i\c\h\r\2\s\k\x\6\2\n\s\m\9\i\h\6\y\s\5\b\l\n\n\z\7\8\z\4\s\g\c\s\n\4\x\x\2\6\p\1\1\b\e\g\i\m\b\x\6\m\v\v\f\f\l\w\h\v\m\5\a\m\2\i\v\p\8\x\r\9\r\7\m\g\b\n\q\n\s\r\b\8\o\s\y\e\t\5\r\s\w\s\1\0\g\f\g\8\7\q\e\n\c\c\p\f\0\u\p\9\g\l\2\c\m\e\2\x\c\y\y\d\6\1\g\z\y\e\y\e\i\n\8\n\e\0\b\h\3\i\l\9\f\s\n\5\y\h\x\y\3\s\0\h\7\8\w\b\4\f\n\s\5\i\1\u\k\2\b\o\x\j\v\a\p\s\4\6\4\j\q\e\2\8\4\3\6\x\u\u\r\l\j\m\g\b\k\k\m\g\o\k\n\u\a\u\a\k\1\k\n\p\i\t\2\e\c\1\b\n\k\p\s\r\o\f\e\e\k\1\9\5\4\1\c\4\i\9\3\7\7\j\n\4\z\t\1\c\7\a\6\k\d\x\p\d\2\d\i\m\1\a\w\p\l\j\s\v\v\4\c\u\t\a\6\0\s\2\1\m\6\z\y\l\n\i\y\k\i\s\y\e\n\y\g\f\h\m\e\c\7\x\b\u\a\h\z\3\3\b\r\v\l\h\t\d\i\y\i\u\n\l\g\r\g\f\o\r\2\v\8\9\j\p\h\u\p\v\3\h\z\b\c\3\4\z\v\h\z\g\4\p\q\a\8\m\c\2\g\y\4\f\f\3\a\j\f\r\x\b\3\x\5\c\w\u\i\t\o\1\k\x\6\0\q\c\2\r\g\s\a\j\g\0\0\x\k\x\q\m\z\o\z\p\1\4\z\3\i\h\e\m\1\k\m\b\q\y\n\o\w\j\p\0\c\b\3\n\d\8\d\o\m\x\3\j\c\8\r\d\o\3\x\t\k\p\h\y\e\2\8\1\j\z\p\w\x\b\r\p\i\g\y\9\p\m\0\a\i\f\3\f\2\z\z\e\8\i\l\i\u\i\y\t\l\8\7\b\f\r\q\x\o\m\l\4\v\8\3\j\t\o\3\w\t\z\i\x\s\g\k\d\3\z\n\3\0\9\5\x\b\e\c\k\k\n\v\n\8\4\u\2\d\8\d\g\v\b\g\2\4\s\n\n\c\y\s\t\k\0\z\h\v\d\l\6\a\i\w\q\v\h\q\t\o\x\2\7\0\y\m\7\c\d\t\v\r\r\1\8\w\6\h\b\k\n\w\0\p\i\7\q\1\d\0\7\7\g\s\p\x\b\z\p\2\u\j\j\p\g\3\u\w\a\x\j\0\a\2\c\a\f\8\8\p\l\z\n\t\4\e\i\0\p\a\b\p\g\c\w\e\e\n\z\0\c\o\j\e\v\1\a\n\e\k\5\a\e\l\9\5\n\y\w\3\o\w\x\r\u\7\r\q\g\i\u\s\6\p\i\v\2\i\f\a\a\m\i\7\s\0\9\z\u\m\a\8\q\i\l\a\0\0\4\8\n\s\y\1\v\k\4\j\q\2\k\u\v\p\s\q\q\k\u\v\0\d\8\7\c\2\b\g\i\g\1\5\6\c\k\3\g\u\d\c\3\j\9\r\5\f\q\u\v\p\d\u\n\4\2\i\1\v\b\4\3\l\r\8\g\6\a\m\w\4\r\w\1\4\3\e\g\p\u\v\r\h\e\w\o\o\7\n\f\r\t\q\o\k\m\h\y\n\d\8\1\w\k\s\z\e\f\p\t\v\c\4\6\d\7\6\z\v\2\4\t\m\q\t\1\a\h\q\m\x\x\4\0\8\m\a\w\9\r\3\q\l\j\g\0\p\6\j\f\0\2\p\y\e\x\8\g\3\6\7\j\d\e\4\g\c\q\f\7\g\6\8\c\m\l\r\z\y\u\7\i\l\4\4\w\7\w\n\f\3\o\8\v\u\d\8\u\0\6\5\v\w\s\3\x\r\m\w\h\q\l\6\k\e\1\3\1\q\7\9\e\0\x\6\7\x\b\k\7\7\h\8\b\7\9\y\d\h\x\7\9\k\a\j\e\c\b\v\c\h\w\v\8\e\5\p\i\c\5\o\j\v\t\x\j\h\q\o\9\g\b\r\f\o\9\1\p\f\j\n\a\0\n\a\t\a\d\2\3\9\u\o\j\5\c\g\s\m\b\p\q\o\k\8\n\a\9\g\a\9\i\y\l\y\c\h\e\v\g\o\m\j\w\b\k\8\4\w\o\8\h\w\w\i\b\b\l\t\e\r\l\k\m\w\k\d\s\b\5\o\n\8\e\c\v\7\5\g\t\8\j\i\r\i\8\w\m\e\8\j\f\3\5\8\4\m\3\o\5\x\l\8\j\5\x\1\v\7\u\w\e\u\k\0\d\a\d\h\8\d\d\y\w\o\k\p\k\4\3\u\t\o\r\h\p\q\t\6\4\v\s\3\w\x\l\0\b\h\m\v\l\k\b\b\l\4\x\m\x\n\j\l\j\k\4\2\4\f\e\w\f\o\y\7\8\e\u\7\f\g\r\f\p\1\y\z\7\t\z\y\8\q\a\4\b\2\2\l\g\3\t\6\8\y\b\r\2\u\6\9\f\9\o\i\m\5\y\l\e\4\9\j\q\l\7\s\a\7\v\d\2\1\j\o\6\t\o\x\f\p\5\f\j\i\f\g\8\w\i\h\3\b\c\0\o\e\s\q\u\5\e\q\4\p\5\2\s\y\c\s\a\l\v\c\1\h\c\8\b\h\m\j\1\n\d\x\v\d\5\m\x\4\i\y\h\8\n\o\x\p\6\3\h\r\y\p\f\d\1\r\7\5\g\i\j\e\0\k\a\7\g\8\2\v\k\l\y\u\5\p\k\o\g\i\v\3\y\k\3\5\u\8\r\q\5\e\q\9\0\n\x\x\n\b\5\8\o\r\f\9\q\j\y\g\f\g\o\n\x\u\1\g\h\o\f\5\b\c\r\b\k\y\8\5\3\y\6\y\n\2\t\s\3\s\0\1\4\p\i\n\i\h\r\1\8\b\l\p\s\s\f\m\c\x\w\d\u\z\f\c\5\0\d\u\k\k\m\a\e\o\z\l\g\z\s\e\d\p\v\v\4\t\v\q\k\c\v\z\r\i\d\p\y\p\j\z\j\b\1\c\7\y\e\k\6\z\c\s\i\4\6\p\h\c\p\2\7\f\h\p\q\t\m\0\p\c\h\p\1\n\6\u\q\b\y\2\d\a\a\8\8\b\c\t\4\w\c\l\c\r\q\a\0\a\z\w\5\g\w\q\j\j\k\o\b\5\d\b\g\7\6\2\6\q\6\a\w\
8\s\q\u\8\z\e\x\g\c\u\q\q\y\3\6\v\5\d\d\h\w\6\x\n\4\5\p\b\r\m\t\8\2\7\n\7\s\4\h\l\4\z\p\q\3\t\6\a\d\j\n\l\6\r\9\r\a\x\6\q\p\4\n\m\m\i\b\3\6\r\9\f\4\1\6\r\1\p\o\a\9\7\y\y\q\l\t\f\r\8\u\c\n\x\x\s\r\m\g\t\5\1\0\5\h\q\9\j\r\x\p\q\v\5\n\f\c\h\a\x\4\m\j\7\7\j\j\y\s\7\6\q\5\o\r\e\4\2\t\8\k\r\j\f\h\x\2\k\o\q\r\w\l\0\d\m\e\p\z\q\u\5\i\6\s\8\n\i\a\6\7\m\1\8\l\2\j\z\h\5\c\0\1\8\l\2\2\p\s\e\h\3\l\9\2\q\d\y\b\y\s\8\q\s\w\f\n\j\x\g\1\b\b\f\4\f\4\a\o\1\e\k\f\2\z\q\o\3\0\6\0\a\3\m\u\1\n\b\x\r\u\3\s\8\4\y\u\9\w\s\j\o\o\q\d\y\2\u\3\z\z\t\e\3\5\u\0\p\d\g\f\z\9\r\a\s\u\z\m\a\m\w\f\u\i\x\d\7\4\3\g\d\u\f\w\l\o\d\q\9\j\j\j\e\z\b\8\a\m\6\3\k\k\b\7\4\k\5\a\y\0\2\c\k\s\q\i\0\0\j\h\8\f\d\t\5\d\b\6\l\c\x\x\2\b\t\e\h\u\2\y\d\0\x\i\m\o\l\t\0\3\r\k\5\x\0\w\1\6\o\9\i\a\c\e\2\u\n\4\a\a\a\m\c\3\a\5\0\r\0\9\g\f\3\z\9\2\y\x\v\0\7\e\w\z\s\o\0\g\9\o\z\3\u\p\g\a\u\7\n\f\g\f\h\d\n\u\9\y\0\d\7\l\r\n\n\o\a\x\w\9\4\1\v\c\c\z\1\x\k\6\c\v\7\n\2\e\l\5\u\r\g\j\7\6\t\7\w\a\z\m\6\6\m\6\f\z\5\t\5\0\0\8\b\3\f\q\7\4\8\l\q\w\3\5\t\r\6\3\3\d\8\5\i\2\k\8\r\b\k\f\h\0\v\h\u\l\i\f\j\h\u\u\2\c\m\f\4\4\i\g\r\e\9\2\u\n\j\s\c\s\c\h\v\j\3\1\y\d\t\2\o\q\3\4\4\b\q\a\5\t\t\y\j\x\o\v\l\f\x\q\f\7\l\k\9\c\r\d\v\k\j\w\e\t\4\6\p\4\f\1\q\y\r\f\c\l\l\1\x\c\e\c\c\e\v\p\1\q\1\n\7\d\t\9\u\j\y\g\z\o\5\b\2\r\0\p\c\a\v\w\m\7\y\d\0\1\z\o\j\3\8\3\n\n\o\q\a\5\9\g\k\x\i\5\e\i\n\6\b\3\m\s\1\5\6\m\a\u\0\n\i\b\g\8\l\n\8\t\c\n\y\c\3\w\x\l\a\o\n\a\w\e\6\v\i\s\a\g\l\c\3\v\i\x\h\g\v\f\b\l\h\p\j\b\v\n\h\s\g\y\8\z\l\n\2\u\b\s\f\s\i\1\o\k\f\i\8\g\k\m\o\n\j\a\r\h\i\g\f\4\x\w\g\r\b\d\a\f\t\t\v\q\m\u\j\2\v\4\z\c\l\2\k\a\b\f\z\d\3\m\a\s\1\c\w\s\r\u\b\7\z\7\v\l\t\1\5\2\c\6\q\5\7\d\k\p\e\r\v\h\v\c\x\f\7\2\i\r\n\b\r\k\7\p\q\c\h\a\7\5\5\e\d\l\u\h\e\i\i\c\v\s\1\s\v\g\4\i\5\d\w\f\5\a\i\4\p\j\m\8\d\4\4\e\f\5\0\n\k\r\0\3\r\v\y\z\m\3\u\h\e\7\k\r\7\l\h\8\r\i\f\1\g\m\x\u\i\9\s\b\g\b\r\j\b\l\4\7\3\r\c\5\e\9\0\x\x\a\o\3\d\3\5\d\5\8\4\d\r\e\6\k\j\k\6\v\3\b\u\c\o\1\z\f\s\0\u\s\c\s\h\e\8\7\p\7\7\v\y\g\h\9\q\w\g\3\x\q\j\t\0\2\h\s\6\k\a\p\t\j\4\5\9\4\1\m\0\p\n\y\j\f\h\p\7\9\q\5\c\u\h\f\a\h\j\i\k\8\7\t\v\k\e\1\d\q\c\n\h\g\7\1\2\g\l\s\e\3\b\r\e\y\9\y\q\h\6\3\0\x\u\d\0\v\q\x\x\d\0\j\u\t\v\y\m\n\7\f\s\v\3\e\d\l\n\e\l\c\t\3\h\v\f\d\x\t\e\x\1\v\u\7\h\3\3\p\5\h\f\w\s\y\b\3\c\r\l\g\t\u\y\p\h\u\g\v\6\u\o\4\b\8\6\9\7\4\n\l\l\w\o\5\s\1\0\3\s\r\k\o\p\b\o\h\c\v\h\g\7\c\9\u\c\z\5\e\j\i\4\y\n\5\n\9\f\5\s\4\1\o\8\d\c\e\x\w\i\n\f\7\r\9\3\j\m\g\5\f\e\i\n\f\q\z\j\u\y\8\j\r\h\c\6\2\7\f\d\f\n\3\c\3\l\v\i\a\1\n\n\i\q\x\h\r\3\a\w\b\c\t\r\v\f\o\q\i\g\j\a\4\o\f\z\1\b\4\l\g\y\w\a\r\3\0\j\7\t\5\n\l\z\u\6\x\s\m\5\h\q\q\v\o\p\r\0\q\i\u\x\z\c\r\y\l\p\z\3\f\0\1\v\m\k\o\h\8\o\v\k\t\m\6\s\a\e\l\c\x\3\v\i\5\d\b\w\1\w\s\l\f\3\k\2\m\e\8\m\4\q\9\2\w\9\5\z\o\j\9\r\v\c\g\h\1\i\9\y\w\d\f\t\3\e\d\d\u\v\z\z\h\h\v\z\h\3\4\u\6\y\s\8\k\1\7\j\y\r\k\v\j\3\z\j\p\x\p\b\n\r\z\7\w\x\9\i\z\r\1\r\1\m\l\x\f\m\n\4\f\d\s\w\l\q\d\w\h\d\g\r\d\9\7\n\w\i\i\h\h\u\w\m\r\9\n\q\m\x\b\9\q\2\k\x\5\i\f\8\s\g\d\m\0\m\z\5\c\n\f\d\9\z\0\s\u\o\k\l\r\a\8\r\b\2\h\z\c\1\v\3\7\m\1\g\0\g\j\m\w\7\9\8\n\d\i\r\q\a\b\c\9\q\v\5\o\l\r\q\j\3\s\p\l\h\y\j\s\k\c\c\j\c\t\1\n\b\1\2\j\t\p\x\7\w\a\l\o\q\r\d\5\d\i\k\7\3\n\w\w\y\0\f\z\v\6\7\l\e\r\t\z\f\h\2\p\n\7\w\p\8\r\p\6\n\e\2\a\p\l\w\7\6\z\7\k\s\9\t\b\h\8\w\q\x\7\x\y\c\t\b\t\t\a\f\n\3\x\o\g\e\9\c\m\n\z\e\k\8\k\n\y\z\d\b\o\c\3\f\w\h\s\j\7\n\x\2\e\l\e\w\8\l\p\v\w\8\2\1\n\o\x\2\l\t\w\n\4\n\q\0\e\s\l\3\j\f\j\7\0\g\2\n\i\r\u\n\s\2\v\1\a\x\4\t\8\1\c\o\7\g\u\k\e\9\h\w\t\5\y\p\x\u\f\o\x\l\h\0\g\7\p\8\d\q\s\n\q\s\r\y\g\4\r\k\2\b\y\c\c\u\w\6\u\s\9\n\t\p\u\d\w\t\w\c\l\7\j\2\o\c\b\7\r\b\u\x\i\g\3\0\p\t\6\u\x\8\c\3\g\r\x\7\8\6\p\z\s\2\7\s\q\f\h\n\o\2\t
\y\k\e\l\w\v\h\7\t\j\p\1\l\6\l\j\x\x\g\u\z\3\0\j\t\7\6\d\5\y\t\s\z\b\g\o\k\b\4\0\n\2\e\4\x\p\y\5\r\j\8\l\x\6\n\c\i\m\d\q\o\2\w\s\r\6\r\q\8\y\q\d\u\t\v\v\y\r\5\y\4\k\t\r\b\r\n\s\x\e\q\j\9\t\s\9\l\4\8\b\u\f\l\e\i\a\t\y\q\4\z\y\c\f\n\q\f\q\2\w\b\h\n\c\t\y\g\a\f\p\q\a\k\i\6\0\q\e\d\u\b\g\0\u\c\s\r\q\x\g\c\d\d\n\b\d\b\b\p\d\3\j\f\p\9\r\r\u\y\p\v\4\u\f\k\g\5\p\t\q\z\1\v\7\a\v\k\o\t\b\v\y\j\4\y\j\l\0\p\u\2\y\w\j\k\f\4\r\b\e\o\g\3\4\4\a\m\n\4\b\w\c\v\1\s\r\r\r\q\0\x\n\c\l\6\v\y\o\u\0\x\w\f\l\s\6\y\i\4\5\x\b\f\m\7\j\s\k\9\m\b\y\r\h\l\8\k\r\r\d\a\x\z\4\w\s\2\h\7\e\j\s\7\n\9\n\e\5\0\c\a\v\3\c\f\4\6\4\q\1\b\6\5\p\b\2\z\x\w\i\b\b\7\m\p\b\l\e\k\3\l\5\o\i\f\i\f\3\1\9\d\s\r\0\z\e\9\y\0\j\4\8\j\n\0\f\y\9\b\5\v\m\n\5\a\v\9\t\0\o\9\5\d\h\i\x\e\g\k\p\w\w\o\z\q\o\6\c\s\w\p\d\k\1\8\r\y\0\5\e\j\2\6\4\b\x\i\4\b\l\y\m\r\9\c\q\u\v\9\q\e\j\k\u\p\o\o\v\x\r\v\0\g\7\w\a\0\k\0\h\z\k\j\b\8\e\v\7\e\g\y\4\q\c\9\b\y\b\5\n\5\k\i\w\7\v\i\e\5\v\0\h\s\n\f\h\n\p\8\0\q\x\n\f\f\o\c\z\b\l\h\4\b\j\u\q\l\g\1\t\s\6\t\m\c\5\s\m\y\d\o\9\8\x\b\6\7\0\5\j\k\h\1\1\y\i\d\p\j\z\x\1\7\h\w\1\m\k\3\g\3\u\1\k\2\c\y\r\q\k\y ]] 00:07:49.590 00:07:49.590 real 0m0.989s 00:07:49.590 user 0m0.682s 00:07:49.590 sys 0m0.394s 00:07:49.590 ************************************ 00:07:49.590 END TEST dd_rw_offset 00:07:49.590 ************************************ 00:07:49.590 16:43:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.590 16:43:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:49.590 16:43:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:49.590 16:43:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:49.590 16:43:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:49.590 16:43:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:49.590 16:43:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:49.590 16:43:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:49.590 16:43:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:49.590 16:43:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:49.590 16:43:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:49.590 16:43:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:49.590 16:43:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.590 [2024-11-29 16:43:13.324774] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
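dd_rw_offset, which just finished above, verifies that data written at a block offset (--seek) reads back identically from the same offset (--skip). A minimal sketch under the same assumptions as the earlier snippets (SPDK_DD and CONF are stand-ins; the real test builds its 4096-character payload with gen_bytes and compares it via a bash pattern match):

# ~4096 printable characters as payload; stand-in for gen_bytes 4096
data=$(head -c 3072 /dev/urandom | base64 -w0)
printf '%s' "$data" > dd.dump0
# write the payload one block into the bdev, then read that block back
"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$CONF")
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(printf '%s' "$CONF")
read -rn4096 data_check < dd.dump1
[[ "$data" == "$data_check" ]] && echo "offset round-trip OK"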
00:07:49.590 [2024-11-29 16:43:13.324866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74072 ] 00:07:49.590 { 00:07:49.590 "subsystems": [ 00:07:49.590 { 00:07:49.590 "subsystem": "bdev", 00:07:49.590 "config": [ 00:07:49.590 { 00:07:49.590 "params": { 00:07:49.590 "trtype": "pcie", 00:07:49.590 "traddr": "0000:00:10.0", 00:07:49.590 "name": "Nvme0" 00:07:49.590 }, 00:07:49.590 "method": "bdev_nvme_attach_controller" 00:07:49.590 }, 00:07:49.590 { 00:07:49.590 "method": "bdev_wait_for_examine" 00:07:49.590 } 00:07:49.590 ] 00:07:49.590 } 00:07:49.590 ] 00:07:49.590 } 00:07:49.850 [2024-11-29 16:43:13.450287] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:49.850 [2024-11-29 16:43:13.476747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.850 [2024-11-29 16:43:13.496566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.850 [2024-11-29 16:43:13.524271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.850  [2024-11-29T16:43:13.901Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:50.109 00:07:50.109 16:43:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.109 00:07:50.109 real 0m14.113s 00:07:50.109 user 0m10.137s 00:07:50.109 sys 0m4.666s 00:07:50.109 16:43:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.109 16:43:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:50.109 ************************************ 00:07:50.109 END TEST spdk_dd_basic_rw 00:07:50.109 ************************************ 00:07:50.109 16:43:13 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:50.109 16:43:13 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.109 16:43:13 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.109 16:43:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:50.109 ************************************ 00:07:50.109 START TEST spdk_dd_posix 00:07:50.109 ************************************ 00:07:50.109 16:43:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:50.109 * Looking for test storage... 
00:07:50.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:50.109 16:43:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:50.109 16:43:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:07:50.109 16:43:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:50.368 16:43:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:50.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.369 --rc genhtml_branch_coverage=1 00:07:50.369 --rc genhtml_function_coverage=1 00:07:50.369 --rc genhtml_legend=1 00:07:50.369 --rc geninfo_all_blocks=1 00:07:50.369 --rc geninfo_unexecuted_blocks=1 00:07:50.369 00:07:50.369 ' 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:50.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.369 --rc genhtml_branch_coverage=1 00:07:50.369 --rc genhtml_function_coverage=1 00:07:50.369 --rc genhtml_legend=1 00:07:50.369 --rc geninfo_all_blocks=1 00:07:50.369 --rc geninfo_unexecuted_blocks=1 00:07:50.369 00:07:50.369 ' 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:50.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.369 --rc genhtml_branch_coverage=1 00:07:50.369 --rc genhtml_function_coverage=1 00:07:50.369 --rc genhtml_legend=1 00:07:50.369 --rc geninfo_all_blocks=1 00:07:50.369 --rc geninfo_unexecuted_blocks=1 00:07:50.369 00:07:50.369 ' 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:50.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.369 --rc genhtml_branch_coverage=1 00:07:50.369 --rc genhtml_function_coverage=1 00:07:50.369 --rc genhtml_legend=1 00:07:50.369 --rc geninfo_all_blocks=1 00:07:50.369 --rc geninfo_unexecuted_blocks=1 00:07:50.369 00:07:50.369 ' 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:50.369 * First test run, liburing in use 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:50.369 ************************************ 00:07:50.369 START TEST dd_flag_append 00:07:50.369 ************************************ 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=gp1x9zzx3ftd6pdsqbxqnibnh0a1a3qa 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=jok0yca7u1failygwikhtm66pgf4el1q 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s gp1x9zzx3ftd6pdsqbxqnibnh0a1a3qa 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s jok0yca7u1failygwikhtm66pgf4el1q 00:07:50.369 16:43:13 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:50.369 [2024-11-29 16:43:14.043656] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:50.370 [2024-11-29 16:43:14.043754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74133 ] 00:07:50.629 [2024-11-29 16:43:14.169000] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
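What the dd_flag_append run above exercises: gen_bytes produced the two 32-character strings visible in the trace, dd.dump0 was seeded with the first and dd.dump1 with the second, and spdk_dd then copied dump0 onto dump1 with --oflag=append. The comparison that follows passes only if dump1 ends up holding its own original content followed by dump0's. A minimal re-creation of that check, assuming the spdk_dd binary path shown in the log and using printf/cat for seeding and verification (the test script compares the generated strings directly):

  dump0_data=gp1x9zzx3ftd6pdsqbxqnibnh0a1a3qa    # strings taken from the trace above
  dump1_data=jok0yca7u1failygwikhtm66pgf4el1q
  dd_dir=/home/vagrant/spdk_repo/spdk/test/dd
  printf %s "$dump0_data" > "$dd_dir/dd.dump0"
  printf %s "$dump1_data" > "$dd_dir/dd.dump1"
  # append dump0 onto dump1 instead of truncating it
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if="$dd_dir/dd.dump0" --of="$dd_dir/dd.dump1" --oflag=append
  # expected result: original dump1 content first, then dump0 content
  [[ "$(cat "$dd_dir/dd.dump1")" == "${dump1_data}${dump0_data}" ]]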
00:07:50.629 [2024-11-29 16:43:14.197505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.629 [2024-11-29 16:43:14.223657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.629 [2024-11-29 16:43:14.260477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.629  [2024-11-29T16:43:14.421Z] Copying: 32/32 [B] (average 31 kBps) 00:07:50.629 00:07:50.629 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ jok0yca7u1failygwikhtm66pgf4el1qgp1x9zzx3ftd6pdsqbxqnibnh0a1a3qa == \j\o\k\0\y\c\a\7\u\1\f\a\i\l\y\g\w\i\k\h\t\m\6\6\p\g\f\4\e\l\1\q\g\p\1\x\9\z\z\x\3\f\t\d\6\p\d\s\q\b\x\q\n\i\b\n\h\0\a\1\a\3\q\a ]] 00:07:50.629 00:07:50.629 real 0m0.422s 00:07:50.629 user 0m0.191s 00:07:50.629 sys 0m0.204s 00:07:50.629 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.629 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:50.629 ************************************ 00:07:50.629 END TEST dd_flag_append 00:07:50.629 ************************************ 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:50.888 ************************************ 00:07:50.888 START TEST dd_flag_directory 00:07:50.888 ************************************ 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- 
# [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.888 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.888 [2024-11-29 16:43:14.511004] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:50.888 [2024-11-29 16:43:14.511098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74167 ] 00:07:50.888 [2024-11-29 16:43:14.636220] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:50.888 [2024-11-29 16:43:14.665614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.148 [2024-11-29 16:43:14.691446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.148 [2024-11-29 16:43:14.725840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.148 [2024-11-29 16:43:14.744547] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:51.148 [2024-11-29 16:43:14.744613] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:51.148 [2024-11-29 16:43:14.744646] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.148 [2024-11-29 16:43:14.806285] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
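The dd_flag_directory case being traced here is a negative test: dd.dump0 is a regular file, so opening it with --iflag=directory (and, in the second half, writing it with --oflag=directory) must fail with "Not a directory", and the NOT/es bookkeeping (es=236 -> 108 -> 1) records the failure as expected. A condensed sketch of the same assertion, assuming the binary and file paths shown in the log:

  dd_dir=/home/vagrant/spdk_repo/spdk/test/dd
  # expect failure: the directory flag on a regular file must be rejected
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
         --if="$dd_dir/dd.dump0" --iflag=directory --of="$dd_dir/dd.dump0"; then
      echo "directory flag unexpectedly accepted a regular file" >&2
      exit 1
  fi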
00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.148 16:43:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:51.148 [2024-11-29 16:43:14.916000] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:51.148 [2024-11-29 16:43:14.916096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74171 ] 00:07:51.416 [2024-11-29 16:43:15.041954] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:51.416 [2024-11-29 16:43:15.068689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.416 [2024-11-29 16:43:15.091202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.416 [2024-11-29 16:43:15.123063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.416 [2024-11-29 16:43:15.143083] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:51.416 [2024-11-29 16:43:15.143142] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:51.416 [2024-11-29 16:43:15.143175] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.676 [2024-11-29 16:43:15.208900] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.676 00:07:51.676 real 0m0.809s 00:07:51.676 user 0m0.394s 00:07:51.676 sys 0m0.207s 00:07:51.676 ************************************ 00:07:51.676 END TEST dd_flag_directory 00:07:51.676 ************************************ 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:51.676 ************************************ 00:07:51.676 START TEST dd_flag_nofollow 00:07:51.676 ************************************ 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.676 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.676 [2024-11-29 16:43:15.365740] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 
initialization... 00:07:51.676 [2024-11-29 16:43:15.365835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74205 ] 00:07:51.935 [2024-11-29 16:43:15.490872] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:51.935 [2024-11-29 16:43:15.515991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.935 [2024-11-29 16:43:15.535527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.935 [2024-11-29 16:43:15.561938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.935 [2024-11-29 16:43:15.578097] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:51.935 [2024-11-29 16:43:15.578162] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:51.935 [2024-11-29 16:43:15.578181] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.935 [2024-11-29 16:43:15.636393] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.935 16:43:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:52.195 [2024-11-29 16:43:15.732594] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:52.195 [2024-11-29 16:43:15.732699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74209 ] 00:07:52.195 [2024-11-29 16:43:15.851148] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:52.195 [2024-11-29 16:43:15.875367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.195 [2024-11-29 16:43:15.897039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.195 [2024-11-29 16:43:15.927934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.195 [2024-11-29 16:43:15.945638] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:52.195 [2024-11-29 16:43:15.945701] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:52.195 [2024-11-29 16:43:15.945734] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.454 [2024-11-29 16:43:16.007204] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:52.454 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:52.454 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:52.454 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:52.454 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:52.454 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:52.454 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:52.454 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:52.454 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:52.454 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:52.454 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.454 [2024-11-29 16:43:16.118752] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:07:52.454 [2024-11-29 16:43:16.118857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74211 ] 00:07:52.454 [2024-11-29 16:43:16.243947] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:52.713 [2024-11-29 16:43:16.272702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.713 [2024-11-29 16:43:16.297167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.713 [2024-11-29 16:43:16.334292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.713  [2024-11-29T16:43:16.505Z] Copying: 512/512 [B] (average 500 kBps) 00:07:52.713 00:07:52.713 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ czmsrr5p0eh09nfdy723nh2pl0iaqgjsrphbwvz7f1vf7lkzlswh9gl0k308o7w8247cz3f1fvwbt8b0lfefz1eas0znhj9xea15bfwlrlj6rzcv73d0kzgcf7eak6gzkyljzkp6ahjk5n5p8mi8iddai090qyfhm5ttjab4cifvpymig511kg2esjri88oquhwf66ve6hb1pnsepd9yilchpb6eks66hqsun5njzanmblyelkz8fc8msmojtwzaajxlw5wb583epye30sjejk3ca256c3ynbrkyw4svxb1o6t4zuvuiitqdrlslgeel4nqapsjzu1wl6xqjwja8cg87mgnxp4m6rlm7z9it4zwim8jk0tghmdawpkhoc1qxok261ezev5bi5pvle5k38yztxchx8efjuxl6et2j11os5eabswwuxcj2mr4e9oeghr8t80q0ak9yar9c5u9znuii7patyxubudsxyqadjlgl7uyte5xjgr4b5o0n7lni == \c\z\m\s\r\r\5\p\0\e\h\0\9\n\f\d\y\7\2\3\n\h\2\p\l\0\i\a\q\g\j\s\r\p\h\b\w\v\z\7\f\1\v\f\7\l\k\z\l\s\w\h\9\g\l\0\k\3\0\8\o\7\w\8\2\4\7\c\z\3\f\1\f\v\w\b\t\8\b\0\l\f\e\f\z\1\e\a\s\0\z\n\h\j\9\x\e\a\1\5\b\f\w\l\r\l\j\6\r\z\c\v\7\3\d\0\k\z\g\c\f\7\e\a\k\6\g\z\k\y\l\j\z\k\p\6\a\h\j\k\5\n\5\p\8\m\i\8\i\d\d\a\i\0\9\0\q\y\f\h\m\5\t\t\j\a\b\4\c\i\f\v\p\y\m\i\g\5\1\1\k\g\2\e\s\j\r\i\8\8\o\q\u\h\w\f\6\6\v\e\6\h\b\1\p\n\s\e\p\d\9\y\i\l\c\h\p\b\6\e\k\s\6\6\h\q\s\u\n\5\n\j\z\a\n\m\b\l\y\e\l\k\z\8\f\c\8\m\s\m\o\j\t\w\z\a\a\j\x\l\w\5\w\b\5\8\3\e\p\y\e\3\0\s\j\e\j\k\3\c\a\2\5\6\c\3\y\n\b\r\k\y\w\4\s\v\x\b\1\o\6\t\4\z\u\v\u\i\i\t\q\d\r\l\s\l\g\e\e\l\4\n\q\a\p\s\j\z\u\1\w\l\6\x\q\j\w\j\a\8\c\g\8\7\m\g\n\x\p\4\m\6\r\l\m\7\z\9\i\t\4\z\w\i\m\8\j\k\0\t\g\h\m\d\a\w\p\k\h\o\c\1\q\x\o\k\2\6\1\e\z\e\v\5\b\i\5\p\v\l\e\5\k\3\8\y\z\t\x\c\h\x\8\e\f\j\u\x\l\6\e\t\2\j\1\1\o\s\5\e\a\b\s\w\w\u\x\c\j\2\m\r\4\e\9\o\e\g\h\r\8\t\8\0\q\0\a\k\9\y\a\r\9\c\5\u\9\z\n\u\i\i\7\p\a\t\y\x\u\b\u\d\s\x\y\q\a\d\j\l\g\l\7\u\y\t\e\5\x\j\g\r\4\b\5\o\0\n\7\l\n\i ]] 00:07:52.713 00:07:52.713 real 0m1.161s 00:07:52.713 user 0m0.541s 00:07:52.713 sys 0m0.376s 00:07:52.713 ************************************ 00:07:52.713 END TEST dd_flag_nofollow 00:07:52.713 ************************************ 00:07:52.713 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.713 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:52.713 16:43:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:52.713 16:43:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.713 16:43:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.713 16:43:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:52.972 ************************************ 00:07:52.972 START TEST dd_flag_noatime 00:07:52.972 ************************************ 00:07:52.972 
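The dd_flag_nofollow case that finished just above links dd.dump0.link -> dd.dump0 and dd.dump1.link -> dd.dump1, expects spdk_dd to refuse the symlinked input under --iflag=nofollow and the symlinked output under --oflag=nofollow ("Too many levels of symbolic links"), and finally copies 512 bytes through the input link without the flag, which succeeds (the Copying: 512/512 record above). A sketch of the input-side half, assuming the paths from the log:

  dd_dir=/home/vagrant/spdk_repo/spdk/test/dd
  ln -fs "$dd_dir/dd.dump0" "$dd_dir/dd.dump0.link"
  # with nofollow, reading through the symlink must fail
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
         --if="$dd_dir/dd.dump0.link" --iflag=nofollow --of="$dd_dir/dd.dump1"; then
      echo "nofollow unexpectedly followed the symlink" >&2
      exit 1
  fi
  # without nofollow the same copy goes through
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if="$dd_dir/dd.dump0.link" --of="$dd_dir/dd.dump1"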
16:43:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:07:52.972 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:07:52.972 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:52.972 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:52.972 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:52.972 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:52.972 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.973 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732898596 00:07:52.973 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.973 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732898596 00:07:52.973 16:43:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:53.910 16:43:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:53.910 [2024-11-29 16:43:17.598015] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:53.910 [2024-11-29 16:43:17.598120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74259 ] 00:07:54.171 [2024-11-29 16:43:17.723829] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:54.171 [2024-11-29 16:43:17.750768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.171 [2024-11-29 16:43:17.780165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.171 [2024-11-29 16:43:17.821100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.171  [2024-11-29T16:43:17.963Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.171 00:07:54.171 16:43:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:54.481 16:43:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732898596 )) 00:07:54.482 16:43:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.482 16:43:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732898596 )) 00:07:54.482 16:43:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.482 [2024-11-29 16:43:18.019356] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
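The dd_flag_noatime trace above records dd.dump0's access time (atime_if=1732898596), sleeps one second, and copies the file with --iflag=noatime; the checks that follow assert that atime is unchanged after that copy and only advances after a second copy made without the flag. A stripped-down version of the first half, assuming the same stat/sleep sequence as the script:

  dd_dir=/home/vagrant/spdk_repo/spdk/test/dd
  atime_before=$(stat --printf=%X "$dd_dir/dd.dump0")
  sleep 1
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if="$dd_dir/dd.dump0" --iflag=noatime --of="$dd_dir/dd.dump1"
  atime_after=$(stat --printf=%X "$dd_dir/dd.dump0")
  # the noatime read must leave the access time untouched
  (( atime_before == atime_after ))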
00:07:54.482 [2024-11-29 16:43:18.019444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74267 ] 00:07:54.482 [2024-11-29 16:43:18.144666] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:54.482 [2024-11-29 16:43:18.171505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.482 [2024-11-29 16:43:18.194235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.482 [2024-11-29 16:43:18.224385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.482  [2024-11-29T16:43:18.532Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.740 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732898598 )) 00:07:54.740 00:07:54.740 real 0m1.843s 00:07:54.740 user 0m0.398s 00:07:54.740 sys 0m0.392s 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:54.740 ************************************ 00:07:54.740 END TEST dd_flag_noatime 00:07:54.740 ************************************ 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:54.740 ************************************ 00:07:54.740 START TEST dd_flags_misc 00:07:54.740 ************************************ 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.740 16:43:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:54.740 [2024-11-29 16:43:18.469675] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:07:54.740 [2024-11-29 16:43:18.469761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74301 ] 00:07:54.999 [2024-11-29 16:43:18.594704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:54.999 [2024-11-29 16:43:18.622242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.999 [2024-11-29 16:43:18.642974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.000 [2024-11-29 16:43:18.673045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.000  [2024-11-29T16:43:19.051Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.259 00:07:55.259 16:43:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xulk1pvjmnc47ohizg76g8x6891mqslo4q10l2i4iylkrzhscd9panp2gcnuz85mg6vdyc3l6ginebhchs4ncv9d9guxf22l5k3fr3x1k4l9wxzok12f2hm4galthwu1gohx5ctvd14c34dzc1m18lgldl7ro3ria6vkx0va6s46ikp0k0atq1tqxxwg5yyxt141l00gkkk29ivno4j3fwg9bsx23xd8n3dxswyab66yl7f91t943e0ojhn1gnkjaokzmyvbgadeol7qumg0sfaiil2xp9wb0gm4e76qjovwz8tc7j6np36yjfeg6gbpnzy0a8oh1ybxqqvpyojybdvv9erlvjmkpv22apjviruj3z2wz48q3pt1rkfbvcjzge8i6ot1xjueahriobv7v7tn47pejezje3ppd57e56njjz9yta540iqea9rl87cyl6xwy0kw7gac3s0wft117lctt67trvkg55ourcxvy2sx9yazkppdhr9oxseg3x8w == \x\u\l\k\1\p\v\j\m\n\c\4\7\o\h\i\z\g\7\6\g\8\x\6\8\9\1\m\q\s\l\o\4\q\1\0\l\2\i\4\i\y\l\k\r\z\h\s\c\d\9\p\a\n\p\2\g\c\n\u\z\8\5\m\g\6\v\d\y\c\3\l\6\g\i\n\e\b\h\c\h\s\4\n\c\v\9\d\9\g\u\x\f\2\2\l\5\k\3\f\r\3\x\1\k\4\l\9\w\x\z\o\k\1\2\f\2\h\m\4\g\a\l\t\h\w\u\1\g\o\h\x\5\c\t\v\d\1\4\c\3\4\d\z\c\1\m\1\8\l\g\l\d\l\7\r\o\3\r\i\a\6\v\k\x\0\v\a\6\s\4\6\i\k\p\0\k\0\a\t\q\1\t\q\x\x\w\g\5\y\y\x\t\1\4\1\l\0\0\g\k\k\k\2\9\i\v\n\o\4\j\3\f\w\g\9\b\s\x\2\3\x\d\8\n\3\d\x\s\w\y\a\b\6\6\y\l\7\f\9\1\t\9\4\3\e\0\o\j\h\n\1\g\n\k\j\a\o\k\z\m\y\v\b\g\a\d\e\o\l\7\q\u\m\g\0\s\f\a\i\i\l\2\x\p\9\w\b\0\g\m\4\e\7\6\q\j\o\v\w\z\8\t\c\7\j\6\n\p\3\6\y\j\f\e\g\6\g\b\p\n\z\y\0\a\8\o\h\1\y\b\x\q\q\v\p\y\o\j\y\b\d\v\v\9\e\r\l\v\j\m\k\p\v\2\2\a\p\j\v\i\r\u\j\3\z\2\w\z\4\8\q\3\p\t\1\r\k\f\b\v\c\j\z\g\e\8\i\6\o\t\1\x\j\u\e\a\h\r\i\o\b\v\7\v\7\t\n\4\7\p\e\j\e\z\j\e\3\p\p\d\5\7\e\5\6\n\j\j\z\9\y\t\a\5\4\0\i\q\e\a\9\r\l\8\7\c\y\l\6\x\w\y\0\k\w\7\g\a\c\3\s\0\w\f\t\1\1\7\l\c\t\t\6\7\t\r\v\k\g\5\5\o\u\r\c\x\v\y\2\s\x\9\y\a\z\k\p\p\d\h\r\9\o\x\s\e\g\3\x\8\w ]] 00:07:55.259 16:43:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.259 16:43:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:55.259 [2024-11-29 16:43:18.861701] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:55.260 [2024-11-29 16:43:18.861798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74305 ] 00:07:55.260 [2024-11-29 16:43:18.987171] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:55.260 [2024-11-29 16:43:19.013537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.260 [2024-11-29 16:43:19.032956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.519 [2024-11-29 16:43:19.063579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.519  [2024-11-29T16:43:19.311Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.519 00:07:55.519 16:43:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xulk1pvjmnc47ohizg76g8x6891mqslo4q10l2i4iylkrzhscd9panp2gcnuz85mg6vdyc3l6ginebhchs4ncv9d9guxf22l5k3fr3x1k4l9wxzok12f2hm4galthwu1gohx5ctvd14c34dzc1m18lgldl7ro3ria6vkx0va6s46ikp0k0atq1tqxxwg5yyxt141l00gkkk29ivno4j3fwg9bsx23xd8n3dxswyab66yl7f91t943e0ojhn1gnkjaokzmyvbgadeol7qumg0sfaiil2xp9wb0gm4e76qjovwz8tc7j6np36yjfeg6gbpnzy0a8oh1ybxqqvpyojybdvv9erlvjmkpv22apjviruj3z2wz48q3pt1rkfbvcjzge8i6ot1xjueahriobv7v7tn47pejezje3ppd57e56njjz9yta540iqea9rl87cyl6xwy0kw7gac3s0wft117lctt67trvkg55ourcxvy2sx9yazkppdhr9oxseg3x8w == \x\u\l\k\1\p\v\j\m\n\c\4\7\o\h\i\z\g\7\6\g\8\x\6\8\9\1\m\q\s\l\o\4\q\1\0\l\2\i\4\i\y\l\k\r\z\h\s\c\d\9\p\a\n\p\2\g\c\n\u\z\8\5\m\g\6\v\d\y\c\3\l\6\g\i\n\e\b\h\c\h\s\4\n\c\v\9\d\9\g\u\x\f\2\2\l\5\k\3\f\r\3\x\1\k\4\l\9\w\x\z\o\k\1\2\f\2\h\m\4\g\a\l\t\h\w\u\1\g\o\h\x\5\c\t\v\d\1\4\c\3\4\d\z\c\1\m\1\8\l\g\l\d\l\7\r\o\3\r\i\a\6\v\k\x\0\v\a\6\s\4\6\i\k\p\0\k\0\a\t\q\1\t\q\x\x\w\g\5\y\y\x\t\1\4\1\l\0\0\g\k\k\k\2\9\i\v\n\o\4\j\3\f\w\g\9\b\s\x\2\3\x\d\8\n\3\d\x\s\w\y\a\b\6\6\y\l\7\f\9\1\t\9\4\3\e\0\o\j\h\n\1\g\n\k\j\a\o\k\z\m\y\v\b\g\a\d\e\o\l\7\q\u\m\g\0\s\f\a\i\i\l\2\x\p\9\w\b\0\g\m\4\e\7\6\q\j\o\v\w\z\8\t\c\7\j\6\n\p\3\6\y\j\f\e\g\6\g\b\p\n\z\y\0\a\8\o\h\1\y\b\x\q\q\v\p\y\o\j\y\b\d\v\v\9\e\r\l\v\j\m\k\p\v\2\2\a\p\j\v\i\r\u\j\3\z\2\w\z\4\8\q\3\p\t\1\r\k\f\b\v\c\j\z\g\e\8\i\6\o\t\1\x\j\u\e\a\h\r\i\o\b\v\7\v\7\t\n\4\7\p\e\j\e\z\j\e\3\p\p\d\5\7\e\5\6\n\j\j\z\9\y\t\a\5\4\0\i\q\e\a\9\r\l\8\7\c\y\l\6\x\w\y\0\k\w\7\g\a\c\3\s\0\w\f\t\1\1\7\l\c\t\t\6\7\t\r\v\k\g\5\5\o\u\r\c\x\v\y\2\s\x\9\y\a\z\k\p\p\d\h\r\9\o\x\s\e\g\3\x\8\w ]] 00:07:55.519 16:43:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.519 16:43:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:55.519 [2024-11-29 16:43:19.248609] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:55.519 [2024-11-29 16:43:19.248707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74309 ] 00:07:55.778 [2024-11-29 16:43:19.373811] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:55.778 [2024-11-29 16:43:19.403421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.778 [2024-11-29 16:43:19.428080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.778 [2024-11-29 16:43:19.459909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.778  [2024-11-29T16:43:19.835Z] Copying: 512/512 [B] (average 166 kBps) 00:07:56.043 00:07:56.043 16:43:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xulk1pvjmnc47ohizg76g8x6891mqslo4q10l2i4iylkrzhscd9panp2gcnuz85mg6vdyc3l6ginebhchs4ncv9d9guxf22l5k3fr3x1k4l9wxzok12f2hm4galthwu1gohx5ctvd14c34dzc1m18lgldl7ro3ria6vkx0va6s46ikp0k0atq1tqxxwg5yyxt141l00gkkk29ivno4j3fwg9bsx23xd8n3dxswyab66yl7f91t943e0ojhn1gnkjaokzmyvbgadeol7qumg0sfaiil2xp9wb0gm4e76qjovwz8tc7j6np36yjfeg6gbpnzy0a8oh1ybxqqvpyojybdvv9erlvjmkpv22apjviruj3z2wz48q3pt1rkfbvcjzge8i6ot1xjueahriobv7v7tn47pejezje3ppd57e56njjz9yta540iqea9rl87cyl6xwy0kw7gac3s0wft117lctt67trvkg55ourcxvy2sx9yazkppdhr9oxseg3x8w == \x\u\l\k\1\p\v\j\m\n\c\4\7\o\h\i\z\g\7\6\g\8\x\6\8\9\1\m\q\s\l\o\4\q\1\0\l\2\i\4\i\y\l\k\r\z\h\s\c\d\9\p\a\n\p\2\g\c\n\u\z\8\5\m\g\6\v\d\y\c\3\l\6\g\i\n\e\b\h\c\h\s\4\n\c\v\9\d\9\g\u\x\f\2\2\l\5\k\3\f\r\3\x\1\k\4\l\9\w\x\z\o\k\1\2\f\2\h\m\4\g\a\l\t\h\w\u\1\g\o\h\x\5\c\t\v\d\1\4\c\3\4\d\z\c\1\m\1\8\l\g\l\d\l\7\r\o\3\r\i\a\6\v\k\x\0\v\a\6\s\4\6\i\k\p\0\k\0\a\t\q\1\t\q\x\x\w\g\5\y\y\x\t\1\4\1\l\0\0\g\k\k\k\2\9\i\v\n\o\4\j\3\f\w\g\9\b\s\x\2\3\x\d\8\n\3\d\x\s\w\y\a\b\6\6\y\l\7\f\9\1\t\9\4\3\e\0\o\j\h\n\1\g\n\k\j\a\o\k\z\m\y\v\b\g\a\d\e\o\l\7\q\u\m\g\0\s\f\a\i\i\l\2\x\p\9\w\b\0\g\m\4\e\7\6\q\j\o\v\w\z\8\t\c\7\j\6\n\p\3\6\y\j\f\e\g\6\g\b\p\n\z\y\0\a\8\o\h\1\y\b\x\q\q\v\p\y\o\j\y\b\d\v\v\9\e\r\l\v\j\m\k\p\v\2\2\a\p\j\v\i\r\u\j\3\z\2\w\z\4\8\q\3\p\t\1\r\k\f\b\v\c\j\z\g\e\8\i\6\o\t\1\x\j\u\e\a\h\r\i\o\b\v\7\v\7\t\n\4\7\p\e\j\e\z\j\e\3\p\p\d\5\7\e\5\6\n\j\j\z\9\y\t\a\5\4\0\i\q\e\a\9\r\l\8\7\c\y\l\6\x\w\y\0\k\w\7\g\a\c\3\s\0\w\f\t\1\1\7\l\c\t\t\6\7\t\r\v\k\g\5\5\o\u\r\c\x\v\y\2\s\x\9\y\a\z\k\p\p\d\h\r\9\o\x\s\e\g\3\x\8\w ]] 00:07:56.043 16:43:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.043 16:43:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:56.043 [2024-11-29 16:43:19.657343] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:56.043 [2024-11-29 16:43:19.657443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74324 ] 00:07:56.043 [2024-11-29 16:43:19.782083] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:56.043 [2024-11-29 16:43:19.808533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.302 [2024-11-29 16:43:19.834124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.303 [2024-11-29 16:43:19.869673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.303  [2024-11-29T16:43:20.095Z] Copying: 512/512 [B] (average 250 kBps) 00:07:56.303 00:07:56.303 16:43:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xulk1pvjmnc47ohizg76g8x6891mqslo4q10l2i4iylkrzhscd9panp2gcnuz85mg6vdyc3l6ginebhchs4ncv9d9guxf22l5k3fr3x1k4l9wxzok12f2hm4galthwu1gohx5ctvd14c34dzc1m18lgldl7ro3ria6vkx0va6s46ikp0k0atq1tqxxwg5yyxt141l00gkkk29ivno4j3fwg9bsx23xd8n3dxswyab66yl7f91t943e0ojhn1gnkjaokzmyvbgadeol7qumg0sfaiil2xp9wb0gm4e76qjovwz8tc7j6np36yjfeg6gbpnzy0a8oh1ybxqqvpyojybdvv9erlvjmkpv22apjviruj3z2wz48q3pt1rkfbvcjzge8i6ot1xjueahriobv7v7tn47pejezje3ppd57e56njjz9yta540iqea9rl87cyl6xwy0kw7gac3s0wft117lctt67trvkg55ourcxvy2sx9yazkppdhr9oxseg3x8w == \x\u\l\k\1\p\v\j\m\n\c\4\7\o\h\i\z\g\7\6\g\8\x\6\8\9\1\m\q\s\l\o\4\q\1\0\l\2\i\4\i\y\l\k\r\z\h\s\c\d\9\p\a\n\p\2\g\c\n\u\z\8\5\m\g\6\v\d\y\c\3\l\6\g\i\n\e\b\h\c\h\s\4\n\c\v\9\d\9\g\u\x\f\2\2\l\5\k\3\f\r\3\x\1\k\4\l\9\w\x\z\o\k\1\2\f\2\h\m\4\g\a\l\t\h\w\u\1\g\o\h\x\5\c\t\v\d\1\4\c\3\4\d\z\c\1\m\1\8\l\g\l\d\l\7\r\o\3\r\i\a\6\v\k\x\0\v\a\6\s\4\6\i\k\p\0\k\0\a\t\q\1\t\q\x\x\w\g\5\y\y\x\t\1\4\1\l\0\0\g\k\k\k\2\9\i\v\n\o\4\j\3\f\w\g\9\b\s\x\2\3\x\d\8\n\3\d\x\s\w\y\a\b\6\6\y\l\7\f\9\1\t\9\4\3\e\0\o\j\h\n\1\g\n\k\j\a\o\k\z\m\y\v\b\g\a\d\e\o\l\7\q\u\m\g\0\s\f\a\i\i\l\2\x\p\9\w\b\0\g\m\4\e\7\6\q\j\o\v\w\z\8\t\c\7\j\6\n\p\3\6\y\j\f\e\g\6\g\b\p\n\z\y\0\a\8\o\h\1\y\b\x\q\q\v\p\y\o\j\y\b\d\v\v\9\e\r\l\v\j\m\k\p\v\2\2\a\p\j\v\i\r\u\j\3\z\2\w\z\4\8\q\3\p\t\1\r\k\f\b\v\c\j\z\g\e\8\i\6\o\t\1\x\j\u\e\a\h\r\i\o\b\v\7\v\7\t\n\4\7\p\e\j\e\z\j\e\3\p\p\d\5\7\e\5\6\n\j\j\z\9\y\t\a\5\4\0\i\q\e\a\9\r\l\8\7\c\y\l\6\x\w\y\0\k\w\7\g\a\c\3\s\0\w\f\t\1\1\7\l\c\t\t\6\7\t\r\v\k\g\5\5\o\u\r\c\x\v\y\2\s\x\9\y\a\z\k\p\p\d\h\r\9\o\x\s\e\g\3\x\8\w ]] 00:07:56.303 16:43:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:56.303 16:43:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:56.303 16:43:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:56.303 16:43:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:56.303 16:43:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.303 16:43:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:56.303 [2024-11-29 16:43:20.077116] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:56.303 [2024-11-29 16:43:20.077220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74328 ] 00:07:56.562 [2024-11-29 16:43:20.202142] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:56.562 [2024-11-29 16:43:20.226381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.562 [2024-11-29 16:43:20.245479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.562 [2024-11-29 16:43:20.272685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.562  [2024-11-29T16:43:20.613Z] Copying: 512/512 [B] (average 500 kBps) 00:07:56.821 00:07:56.821 16:43:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 6x115i8e6nxi2lxfe79977r5h76y0xp6if71m5j6j8fk7t9ukxp9amn1x1z4yft0s7l64ku0es0qz0sf4dx4b7gats37a6q2lywl0b6adsuexxsolpyoeaso9zbmus5vlz8ted970lzjusoor03bqbcvbf6wv3dq7ewed1j3mw36k9j2gnz67lk38nuvnlhzp6iljf0hjegwuswxs9fei1o23z759bcf420rreyiypbd42lzplikhs76scrlftgb51o1gcvzcwvweho1eezrwwzygplxvi2vtuoakgr2x8ectyoqgg6kfdpt6qimk8hjq2sxy1nseygme1uxu2m37a54djylobdg24omeb6rhu40a0wr3jc7nk8244yfmpuqwrs9r76ni8itds1kbo0f62vr1ll4dkjm97pkzpsru7l5vdnwa4uo9xp9o8juq5cq8jfsirdqnzbul4zxnb5r4ysrc6qd6rpsfz2q17nxchsdsdopzqb3wi531k2d52hs == \6\x\1\1\5\i\8\e\6\n\x\i\2\l\x\f\e\7\9\9\7\7\r\5\h\7\6\y\0\x\p\6\i\f\7\1\m\5\j\6\j\8\f\k\7\t\9\u\k\x\p\9\a\m\n\1\x\1\z\4\y\f\t\0\s\7\l\6\4\k\u\0\e\s\0\q\z\0\s\f\4\d\x\4\b\7\g\a\t\s\3\7\a\6\q\2\l\y\w\l\0\b\6\a\d\s\u\e\x\x\s\o\l\p\y\o\e\a\s\o\9\z\b\m\u\s\5\v\l\z\8\t\e\d\9\7\0\l\z\j\u\s\o\o\r\0\3\b\q\b\c\v\b\f\6\w\v\3\d\q\7\e\w\e\d\1\j\3\m\w\3\6\k\9\j\2\g\n\z\6\7\l\k\3\8\n\u\v\n\l\h\z\p\6\i\l\j\f\0\h\j\e\g\w\u\s\w\x\s\9\f\e\i\1\o\2\3\z\7\5\9\b\c\f\4\2\0\r\r\e\y\i\y\p\b\d\4\2\l\z\p\l\i\k\h\s\7\6\s\c\r\l\f\t\g\b\5\1\o\1\g\c\v\z\c\w\v\w\e\h\o\1\e\e\z\r\w\w\z\y\g\p\l\x\v\i\2\v\t\u\o\a\k\g\r\2\x\8\e\c\t\y\o\q\g\g\6\k\f\d\p\t\6\q\i\m\k\8\h\j\q\2\s\x\y\1\n\s\e\y\g\m\e\1\u\x\u\2\m\3\7\a\5\4\d\j\y\l\o\b\d\g\2\4\o\m\e\b\6\r\h\u\4\0\a\0\w\r\3\j\c\7\n\k\8\2\4\4\y\f\m\p\u\q\w\r\s\9\r\7\6\n\i\8\i\t\d\s\1\k\b\o\0\f\6\2\v\r\1\l\l\4\d\k\j\m\9\7\p\k\z\p\s\r\u\7\l\5\v\d\n\w\a\4\u\o\9\x\p\9\o\8\j\u\q\5\c\q\8\j\f\s\i\r\d\q\n\z\b\u\l\4\z\x\n\b\5\r\4\y\s\r\c\6\q\d\6\r\p\s\f\z\2\q\1\7\n\x\c\h\s\d\s\d\o\p\z\q\b\3\w\i\5\3\1\k\2\d\5\2\h\s ]] 00:07:56.821 16:43:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.821 16:43:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:56.821 [2024-11-29 16:43:20.463076] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:56.821 [2024-11-29 16:43:20.463170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74339 ] 00:07:56.821 [2024-11-29 16:43:20.587620] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:57.081 [2024-11-29 16:43:20.616184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.081 [2024-11-29 16:43:20.640847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.081 [2024-11-29 16:43:20.671790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.081  [2024-11-29T16:43:20.873Z] Copying: 512/512 [B] (average 500 kBps) 00:07:57.081 00:07:57.081 16:43:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 6x115i8e6nxi2lxfe79977r5h76y0xp6if71m5j6j8fk7t9ukxp9amn1x1z4yft0s7l64ku0es0qz0sf4dx4b7gats37a6q2lywl0b6adsuexxsolpyoeaso9zbmus5vlz8ted970lzjusoor03bqbcvbf6wv3dq7ewed1j3mw36k9j2gnz67lk38nuvnlhzp6iljf0hjegwuswxs9fei1o23z759bcf420rreyiypbd42lzplikhs76scrlftgb51o1gcvzcwvweho1eezrwwzygplxvi2vtuoakgr2x8ectyoqgg6kfdpt6qimk8hjq2sxy1nseygme1uxu2m37a54djylobdg24omeb6rhu40a0wr3jc7nk8244yfmpuqwrs9r76ni8itds1kbo0f62vr1ll4dkjm97pkzpsru7l5vdnwa4uo9xp9o8juq5cq8jfsirdqnzbul4zxnb5r4ysrc6qd6rpsfz2q17nxchsdsdopzqb3wi531k2d52hs == \6\x\1\1\5\i\8\e\6\n\x\i\2\l\x\f\e\7\9\9\7\7\r\5\h\7\6\y\0\x\p\6\i\f\7\1\m\5\j\6\j\8\f\k\7\t\9\u\k\x\p\9\a\m\n\1\x\1\z\4\y\f\t\0\s\7\l\6\4\k\u\0\e\s\0\q\z\0\s\f\4\d\x\4\b\7\g\a\t\s\3\7\a\6\q\2\l\y\w\l\0\b\6\a\d\s\u\e\x\x\s\o\l\p\y\o\e\a\s\o\9\z\b\m\u\s\5\v\l\z\8\t\e\d\9\7\0\l\z\j\u\s\o\o\r\0\3\b\q\b\c\v\b\f\6\w\v\3\d\q\7\e\w\e\d\1\j\3\m\w\3\6\k\9\j\2\g\n\z\6\7\l\k\3\8\n\u\v\n\l\h\z\p\6\i\l\j\f\0\h\j\e\g\w\u\s\w\x\s\9\f\e\i\1\o\2\3\z\7\5\9\b\c\f\4\2\0\r\r\e\y\i\y\p\b\d\4\2\l\z\p\l\i\k\h\s\7\6\s\c\r\l\f\t\g\b\5\1\o\1\g\c\v\z\c\w\v\w\e\h\o\1\e\e\z\r\w\w\z\y\g\p\l\x\v\i\2\v\t\u\o\a\k\g\r\2\x\8\e\c\t\y\o\q\g\g\6\k\f\d\p\t\6\q\i\m\k\8\h\j\q\2\s\x\y\1\n\s\e\y\g\m\e\1\u\x\u\2\m\3\7\a\5\4\d\j\y\l\o\b\d\g\2\4\o\m\e\b\6\r\h\u\4\0\a\0\w\r\3\j\c\7\n\k\8\2\4\4\y\f\m\p\u\q\w\r\s\9\r\7\6\n\i\8\i\t\d\s\1\k\b\o\0\f\6\2\v\r\1\l\l\4\d\k\j\m\9\7\p\k\z\p\s\r\u\7\l\5\v\d\n\w\a\4\u\o\9\x\p\9\o\8\j\u\q\5\c\q\8\j\f\s\i\r\d\q\n\z\b\u\l\4\z\x\n\b\5\r\4\y\s\r\c\6\q\d\6\r\p\s\f\z\2\q\1\7\n\x\c\h\s\d\s\d\o\p\z\q\b\3\w\i\5\3\1\k\2\d\5\2\h\s ]] 00:07:57.081 16:43:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:57.081 16:43:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:57.081 [2024-11-29 16:43:20.868419] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:57.081 [2024-11-29 16:43:20.868514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74347 ] 00:07:57.340 [2024-11-29 16:43:20.993624] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:57.340 [2024-11-29 16:43:21.021256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.340 [2024-11-29 16:43:21.045487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.340 [2024-11-29 16:43:21.082480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.340  [2024-11-29T16:43:21.391Z] Copying: 512/512 [B] (average 250 kBps) 00:07:57.599 00:07:57.599 16:43:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 6x115i8e6nxi2lxfe79977r5h76y0xp6if71m5j6j8fk7t9ukxp9amn1x1z4yft0s7l64ku0es0qz0sf4dx4b7gats37a6q2lywl0b6adsuexxsolpyoeaso9zbmus5vlz8ted970lzjusoor03bqbcvbf6wv3dq7ewed1j3mw36k9j2gnz67lk38nuvnlhzp6iljf0hjegwuswxs9fei1o23z759bcf420rreyiypbd42lzplikhs76scrlftgb51o1gcvzcwvweho1eezrwwzygplxvi2vtuoakgr2x8ectyoqgg6kfdpt6qimk8hjq2sxy1nseygme1uxu2m37a54djylobdg24omeb6rhu40a0wr3jc7nk8244yfmpuqwrs9r76ni8itds1kbo0f62vr1ll4dkjm97pkzpsru7l5vdnwa4uo9xp9o8juq5cq8jfsirdqnzbul4zxnb5r4ysrc6qd6rpsfz2q17nxchsdsdopzqb3wi531k2d52hs == \6\x\1\1\5\i\8\e\6\n\x\i\2\l\x\f\e\7\9\9\7\7\r\5\h\7\6\y\0\x\p\6\i\f\7\1\m\5\j\6\j\8\f\k\7\t\9\u\k\x\p\9\a\m\n\1\x\1\z\4\y\f\t\0\s\7\l\6\4\k\u\0\e\s\0\q\z\0\s\f\4\d\x\4\b\7\g\a\t\s\3\7\a\6\q\2\l\y\w\l\0\b\6\a\d\s\u\e\x\x\s\o\l\p\y\o\e\a\s\o\9\z\b\m\u\s\5\v\l\z\8\t\e\d\9\7\0\l\z\j\u\s\o\o\r\0\3\b\q\b\c\v\b\f\6\w\v\3\d\q\7\e\w\e\d\1\j\3\m\w\3\6\k\9\j\2\g\n\z\6\7\l\k\3\8\n\u\v\n\l\h\z\p\6\i\l\j\f\0\h\j\e\g\w\u\s\w\x\s\9\f\e\i\1\o\2\3\z\7\5\9\b\c\f\4\2\0\r\r\e\y\i\y\p\b\d\4\2\l\z\p\l\i\k\h\s\7\6\s\c\r\l\f\t\g\b\5\1\o\1\g\c\v\z\c\w\v\w\e\h\o\1\e\e\z\r\w\w\z\y\g\p\l\x\v\i\2\v\t\u\o\a\k\g\r\2\x\8\e\c\t\y\o\q\g\g\6\k\f\d\p\t\6\q\i\m\k\8\h\j\q\2\s\x\y\1\n\s\e\y\g\m\e\1\u\x\u\2\m\3\7\a\5\4\d\j\y\l\o\b\d\g\2\4\o\m\e\b\6\r\h\u\4\0\a\0\w\r\3\j\c\7\n\k\8\2\4\4\y\f\m\p\u\q\w\r\s\9\r\7\6\n\i\8\i\t\d\s\1\k\b\o\0\f\6\2\v\r\1\l\l\4\d\k\j\m\9\7\p\k\z\p\s\r\u\7\l\5\v\d\n\w\a\4\u\o\9\x\p\9\o\8\j\u\q\5\c\q\8\j\f\s\i\r\d\q\n\z\b\u\l\4\z\x\n\b\5\r\4\y\s\r\c\6\q\d\6\r\p\s\f\z\2\q\1\7\n\x\c\h\s\d\s\d\o\p\z\q\b\3\w\i\5\3\1\k\2\d\5\2\h\s ]] 00:07:57.599 16:43:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:57.599 16:43:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:57.599 [2024-11-29 16:43:21.288978] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:57.599 [2024-11-29 16:43:21.289070] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74351 ] 00:07:57.858 [2024-11-29 16:43:21.413891] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:57.858 [2024-11-29 16:43:21.440227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.858 [2024-11-29 16:43:21.463253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.858 [2024-11-29 16:43:21.493928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.858  [2024-11-29T16:43:21.650Z] Copying: 512/512 [B] (average 100 kBps) 00:07:57.858 00:07:57.858 16:43:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 6x115i8e6nxi2lxfe79977r5h76y0xp6if71m5j6j8fk7t9ukxp9amn1x1z4yft0s7l64ku0es0qz0sf4dx4b7gats37a6q2lywl0b6adsuexxsolpyoeaso9zbmus5vlz8ted970lzjusoor03bqbcvbf6wv3dq7ewed1j3mw36k9j2gnz67lk38nuvnlhzp6iljf0hjegwuswxs9fei1o23z759bcf420rreyiypbd42lzplikhs76scrlftgb51o1gcvzcwvweho1eezrwwzygplxvi2vtuoakgr2x8ectyoqgg6kfdpt6qimk8hjq2sxy1nseygme1uxu2m37a54djylobdg24omeb6rhu40a0wr3jc7nk8244yfmpuqwrs9r76ni8itds1kbo0f62vr1ll4dkjm97pkzpsru7l5vdnwa4uo9xp9o8juq5cq8jfsirdqnzbul4zxnb5r4ysrc6qd6rpsfz2q17nxchsdsdopzqb3wi531k2d52hs == \6\x\1\1\5\i\8\e\6\n\x\i\2\l\x\f\e\7\9\9\7\7\r\5\h\7\6\y\0\x\p\6\i\f\7\1\m\5\j\6\j\8\f\k\7\t\9\u\k\x\p\9\a\m\n\1\x\1\z\4\y\f\t\0\s\7\l\6\4\k\u\0\e\s\0\q\z\0\s\f\4\d\x\4\b\7\g\a\t\s\3\7\a\6\q\2\l\y\w\l\0\b\6\a\d\s\u\e\x\x\s\o\l\p\y\o\e\a\s\o\9\z\b\m\u\s\5\v\l\z\8\t\e\d\9\7\0\l\z\j\u\s\o\o\r\0\3\b\q\b\c\v\b\f\6\w\v\3\d\q\7\e\w\e\d\1\j\3\m\w\3\6\k\9\j\2\g\n\z\6\7\l\k\3\8\n\u\v\n\l\h\z\p\6\i\l\j\f\0\h\j\e\g\w\u\s\w\x\s\9\f\e\i\1\o\2\3\z\7\5\9\b\c\f\4\2\0\r\r\e\y\i\y\p\b\d\4\2\l\z\p\l\i\k\h\s\7\6\s\c\r\l\f\t\g\b\5\1\o\1\g\c\v\z\c\w\v\w\e\h\o\1\e\e\z\r\w\w\z\y\g\p\l\x\v\i\2\v\t\u\o\a\k\g\r\2\x\8\e\c\t\y\o\q\g\g\6\k\f\d\p\t\6\q\i\m\k\8\h\j\q\2\s\x\y\1\n\s\e\y\g\m\e\1\u\x\u\2\m\3\7\a\5\4\d\j\y\l\o\b\d\g\2\4\o\m\e\b\6\r\h\u\4\0\a\0\w\r\3\j\c\7\n\k\8\2\4\4\y\f\m\p\u\q\w\r\s\9\r\7\6\n\i\8\i\t\d\s\1\k\b\o\0\f\6\2\v\r\1\l\l\4\d\k\j\m\9\7\p\k\z\p\s\r\u\7\l\5\v\d\n\w\a\4\u\o\9\x\p\9\o\8\j\u\q\5\c\q\8\j\f\s\i\r\d\q\n\z\b\u\l\4\z\x\n\b\5\r\4\y\s\r\c\6\q\d\6\r\p\s\f\z\2\q\1\7\n\x\c\h\s\d\s\d\o\p\z\q\b\3\w\i\5\3\1\k\2\d\5\2\h\s ]] 00:07:57.858 00:07:57.858 real 0m3.214s 00:07:57.858 user 0m1.563s 00:07:57.858 sys 0m1.440s 00:07:57.858 ************************************ 00:07:57.858 END TEST dd_flags_misc 00:07:57.858 ************************************ 00:07:57.858 16:43:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.858 16:43:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:58.117 * Second test run, disabling liburing, forcing AIO 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:58.117 ************************************ 00:07:58.117 START TEST dd_flag_append_forced_aio 00:07:58.117 ************************************ 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:07:58.117 
16:43:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=wyfh0tnzvi1v74822rji42i3e3y1o3p4 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=c5gzn95rvwlm40wzgqg3wg5thzss65f1 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s wyfh0tnzvi1v74822rji42i3e3y1o3p4 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s c5gzn95rvwlm40wzgqg3wg5thzss65f1 00:07:58.117 16:43:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:58.117 [2024-11-29 16:43:21.720147] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:58.117 [2024-11-29 16:43:21.720244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74385 ] 00:07:58.117 [2024-11-29 16:43:21.837393] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:58.117 [2024-11-29 16:43:21.860917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.117 [2024-11-29 16:43:21.881950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.376 [2024-11-29 16:43:21.910680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.376  [2024-11-29T16:43:22.168Z] Copying: 32/32 [B] (average 31 kBps) 00:07:58.376 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ c5gzn95rvwlm40wzgqg3wg5thzss65f1wyfh0tnzvi1v74822rji42i3e3y1o3p4 == \c\5\g\z\n\9\5\r\v\w\l\m\4\0\w\z\g\q\g\3\w\g\5\t\h\z\s\s\6\5\f\1\w\y\f\h\0\t\n\z\v\i\1\v\7\4\8\2\2\r\j\i\4\2\i\3\e\3\y\1\o\3\p\4 ]] 00:07:58.376 00:07:58.376 real 0m0.387s 00:07:58.376 user 0m0.193s 00:07:58.376 sys 0m0.076s 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.376 ************************************ 00:07:58.376 END TEST dd_flag_append_forced_aio 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.376 ************************************ 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:58.376 ************************************ 00:07:58.376 START TEST dd_flag_directory_forced_aio 00:07:58.376 ************************************ 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.376 16:43:22 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.376 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.635 [2024-11-29 16:43:22.169617] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:58.635 [2024-11-29 16:43:22.169713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74406 ] 00:07:58.635 [2024-11-29 16:43:22.294844] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:58.635 [2024-11-29 16:43:22.322392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.635 [2024-11-29 16:43:22.340465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.635 [2024-11-29 16:43:22.368287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.635 [2024-11-29 16:43:22.383706] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.635 [2024-11-29 16:43:22.383773] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.635 [2024-11-29 16:43:22.383789] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.893 [2024-11-29 16:43:22.446243] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.893 16:43:22 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.893 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.893 [2024-11-29 16:43:22.556307] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:58.893 [2024-11-29 16:43:22.556433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74421 ] 00:07:58.893 [2024-11-29 16:43:22.680882] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:59.152 [2024-11-29 16:43:22.709825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.152 [2024-11-29 16:43:22.731165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.152 [2024-11-29 16:43:22.758696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.152 [2024-11-29 16:43:22.774064] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:59.152 [2024-11-29 16:43:22.774123] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:59.152 [2024-11-29 16:43:22.774140] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.153 [2024-11-29 16:43:22.830997] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.153 00:07:59.153 real 0m0.774s 00:07:59.153 user 0m0.375s 00:07:59.153 sys 0m0.192s 00:07:59.153 ************************************ 00:07:59.153 END TEST dd_flag_directory_forced_aio 00:07:59.153 ************************************ 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:59.153 ************************************ 00:07:59.153 START TEST dd_flag_nofollow_forced_aio 00:07:59.153 ************************************ 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:59.153 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:59.412 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.412 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:59.412 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.412 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.412 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.412 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.412 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.412 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.412 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.412 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.412 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.412 16:43:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.412 [2024-11-29 16:43:23.002830] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:59.412 [2024-11-29 16:43:23.002928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74444 ] 00:07:59.412 [2024-11-29 16:43:23.127874] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:59.412 [2024-11-29 16:43:23.150632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.412 [2024-11-29 16:43:23.170166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.412 [2024-11-29 16:43:23.197235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.671 [2024-11-29 16:43:23.214635] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:59.671 [2024-11-29 16:43:23.214695] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:59.671 [2024-11-29 16:43:23.214710] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.671 [2024-11-29 16:43:23.275276] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:59.671 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:59.671 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.671 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:59.671 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:59.671 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:59.671 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.671 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.671 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:59.672 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.672 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.672 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.672 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.672 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.672 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.672 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.672 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.672 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.672 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.672 [2024-11-29 16:43:23.389373] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:07:59.672 [2024-11-29 16:43:23.389460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74448 ] 00:07:59.931 [2024-11-29 16:43:23.513664] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:59.931 [2024-11-29 16:43:23.538982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.931 [2024-11-29 16:43:23.558614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.931 [2024-11-29 16:43:23.585299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.931 [2024-11-29 16:43:23.601959] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:59.931 [2024-11-29 16:43:23.602005] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:59.931 [2024-11-29 16:43:23.602023] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.931 [2024-11-29 16:43:23.659731] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:59.931 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:59.931 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.931 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:59.931 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:59.931 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:59.931 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.931 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:59.931 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:59.931 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:00.190 16:43:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.190 [2024-11-29 16:43:23.778001] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:08:00.190 [2024-11-29 16:43:23.778090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74461 ] 00:08:00.190 [2024-11-29 16:43:23.902608] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:00.190 [2024-11-29 16:43:23.924310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.190 [2024-11-29 16:43:23.942041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.190 [2024-11-29 16:43:23.968657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.450  [2024-11-29T16:43:24.242Z] Copying: 512/512 [B] (average 500 kBps) 00:08:00.450 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ jzkpzxr4h6wqf0giq8q5nxiuyv64a7o8kfu4dnhqynjrph3cep2zl5cy1b2522yk3mci99pzrv18rfndxw1h01bi0pjx4l38c5bxqi975ouysmgh4xda8061iz1fr11uoqrg42a14ajfxghawj6p9hd7yr94et7zzvj080099gf4bzrislstywncht764b810s545t1edqbfqkbpqkze0bxl9rpa5m0d9adnxcx7qc7uenjbw2okx4uvhc2hnnpn7s8wd5dags1u8jhbdyjl31qik95j5tnygcfaceb1ea0sginlz54t0sd2m8awtjbj5ocnjn3f5hkqvvi0mpgaim0ttl3pl1il6qdpq4f69l8yu8k1dme3os25jt90tenfvmtbd2g5dyhb0bsfpa3xj0u1opcb4r43aod1cuaqv0v2wbbq3o329sr9wzkof7obel4qci6pc13x5mxpymsvfxyqb17yyahqb3ure5onzcxr5n6nrrh2n0wnvvnu6610 == \j\z\k\p\z\x\r\4\h\6\w\q\f\0\g\i\q\8\q\5\n\x\i\u\y\v\6\4\a\7\o\8\k\f\u\4\d\n\h\q\y\n\j\r\p\h\3\c\e\p\2\z\l\5\c\y\1\b\2\5\2\2\y\k\3\m\c\i\9\9\p\z\r\v\1\8\r\f\n\d\x\w\1\h\0\1\b\i\0\p\j\x\4\l\3\8\c\5\b\x\q\i\9\7\5\o\u\y\s\m\g\h\4\x\d\a\8\0\6\1\i\z\1\f\r\1\1\u\o\q\r\g\4\2\a\1\4\a\j\f\x\g\h\a\w\j\6\p\9\h\d\7\y\r\9\4\e\t\7\z\z\v\j\0\8\0\0\9\9\g\f\4\b\z\r\i\s\l\s\t\y\w\n\c\h\t\7\6\4\b\8\1\0\s\5\4\5\t\1\e\d\q\b\f\q\k\b\p\q\k\z\e\0\b\x\l\9\r\p\a\5\m\0\d\9\a\d\n\x\c\x\7\q\c\7\u\e\n\j\b\w\2\o\k\x\4\u\v\h\c\2\h\n\n\p\n\7\s\8\w\d\5\d\a\g\s\1\u\8\j\h\b\d\y\j\l\3\1\q\i\k\9\5\j\5\t\n\y\g\c\f\a\c\e\b\1\e\a\0\s\g\i\n\l\z\5\4\t\0\s\d\2\m\8\a\w\t\j\b\j\5\o\c\n\j\n\3\f\5\h\k\q\v\v\i\0\m\p\g\a\i\m\0\t\t\l\3\p\l\1\i\l\6\q\d\p\q\4\f\6\9\l\8\y\u\8\k\1\d\m\e\3\o\s\2\5\j\t\9\0\t\e\n\f\v\m\t\b\d\2\g\5\d\y\h\b\0\b\s\f\p\a\3\x\j\0\u\1\o\p\c\b\4\r\4\3\a\o\d\1\c\u\a\q\v\0\v\2\w\b\b\q\3\o\3\2\9\s\r\9\w\z\k\o\f\7\o\b\e\l\4\q\c\i\6\p\c\1\3\x\5\m\x\p\y\m\s\v\f\x\y\q\b\1\7\y\y\a\h\q\b\3\u\r\e\5\o\n\z\c\x\r\5\n\6\n\r\r\h\2\n\0\w\n\v\v\n\u\6\6\1\0 ]] 00:08:00.450 00:08:00.450 real 0m1.172s 00:08:00.450 user 0m0.569s 00:08:00.450 sys 0m0.276s 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.450 ************************************ 00:08:00.450 END TEST dd_flag_nofollow_forced_aio 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:00.450 ************************************ 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:00.450 ************************************ 00:08:00.450 START TEST 
dd_flag_noatime_forced_aio 00:08:00.450 ************************************ 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732898603 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732898604 00:08:00.450 16:43:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:01.827 16:43:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.827 [2024-11-29 16:43:25.243852] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:01.827 [2024-11-29 16:43:25.243974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74496 ] 00:08:01.827 [2024-11-29 16:43:25.369753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:01.827 [2024-11-29 16:43:25.402907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.827 [2024-11-29 16:43:25.426307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.827 [2024-11-29 16:43:25.459568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.827  [2024-11-29T16:43:25.619Z] Copying: 512/512 [B] (average 500 kBps) 00:08:01.827 00:08:02.086 16:43:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.086 16:43:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732898603 )) 00:08:02.086 16:43:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.086 16:43:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732898604 )) 00:08:02.086 16:43:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.086 [2024-11-29 16:43:25.685878] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:02.086 [2024-11-29 16:43:25.685971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74512 ] 00:08:02.086 [2024-11-29 16:43:25.810405] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:02.086 [2024-11-29 16:43:25.836177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.086 [2024-11-29 16:43:25.856589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.345 [2024-11-29 16:43:25.886629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.345  [2024-11-29T16:43:26.137Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.345 00:08:02.345 16:43:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.345 16:43:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732898605 )) 00:08:02.345 00:08:02.345 real 0m1.862s 00:08:02.345 user 0m0.408s 00:08:02.345 sys 0m0.211s 00:08:02.345 16:43:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.345 ************************************ 00:08:02.345 END TEST dd_flag_noatime_forced_aio 00:08:02.345 16:43:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:02.345 ************************************ 00:08:02.345 16:43:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:02.345 16:43:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.345 16:43:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.345 16:43:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:02.345 ************************************ 00:08:02.345 START TEST dd_flags_misc_forced_aio 00:08:02.345 ************************************ 00:08:02.345 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:02.345 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:02.345 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:02.345 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:02.346 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:02.346 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:02.346 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:02.346 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:02.346 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.346 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:02.346 [2024-11-29 16:43:26.127633] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:08:02.346 [2024-11-29 16:43:26.127702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74534 ] 00:08:02.605 [2024-11-29 16:43:26.245643] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:02.605 [2024-11-29 16:43:26.266647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.605 [2024-11-29 16:43:26.284924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.605 [2024-11-29 16:43:26.311340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.605  [2024-11-29T16:43:26.656Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.864 00:08:02.864 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ogt49cwor93g6nmc3d5w3fo1pircxhf2k0pstdq65shnhhqtmwz10x4ahafsjsthfpfvo3fqnwuvcx012n8iiu2ggkn1dg7v55wzztk2s6oc4bq3xh7fw004a2grrj4e4fjf7ze4tljbu41b7b1xq4riqqexg3diuuznhufqu3m5664orccywy19luepw8vzre1x8f8f09w9x6qtjlkgqoio9i58fe9m2017fuy2yy0k8jgc348nvtzx0obpi3xmqpp7itt739jtvlliutuh6842ooklt8hf9th91s2wkgbssopef3z9v04apq7sbktokadodxqyeywvsagcskstr2zi4ys2ldjnhzn45zmejj3wvgm3itelutcymkgy6uk8l0mrexu4h1k497lc50zqdkk1vpd9x2ng4mhoq6i4jdi0zhuitqh2ajbs8vietvvzx4uw1dprav5huxy9b8s9ovcsu9bhqsrhk5kg3iij3oy7856vf9kvsd3ncsgd8btq == \o\g\t\4\9\c\w\o\r\9\3\g\6\n\m\c\3\d\5\w\3\f\o\1\p\i\r\c\x\h\f\2\k\0\p\s\t\d\q\6\5\s\h\n\h\h\q\t\m\w\z\1\0\x\4\a\h\a\f\s\j\s\t\h\f\p\f\v\o\3\f\q\n\w\u\v\c\x\0\1\2\n\8\i\i\u\2\g\g\k\n\1\d\g\7\v\5\5\w\z\z\t\k\2\s\6\o\c\4\b\q\3\x\h\7\f\w\0\0\4\a\2\g\r\r\j\4\e\4\f\j\f\7\z\e\4\t\l\j\b\u\4\1\b\7\b\1\x\q\4\r\i\q\q\e\x\g\3\d\i\u\u\z\n\h\u\f\q\u\3\m\5\6\6\4\o\r\c\c\y\w\y\1\9\l\u\e\p\w\8\v\z\r\e\1\x\8\f\8\f\0\9\w\9\x\6\q\t\j\l\k\g\q\o\i\o\9\i\5\8\f\e\9\m\2\0\1\7\f\u\y\2\y\y\0\k\8\j\g\c\3\4\8\n\v\t\z\x\0\o\b\p\i\3\x\m\q\p\p\7\i\t\t\7\3\9\j\t\v\l\l\i\u\t\u\h\6\8\4\2\o\o\k\l\t\8\h\f\9\t\h\9\1\s\2\w\k\g\b\s\s\o\p\e\f\3\z\9\v\0\4\a\p\q\7\s\b\k\t\o\k\a\d\o\d\x\q\y\e\y\w\v\s\a\g\c\s\k\s\t\r\2\z\i\4\y\s\2\l\d\j\n\h\z\n\4\5\z\m\e\j\j\3\w\v\g\m\3\i\t\e\l\u\t\c\y\m\k\g\y\6\u\k\8\l\0\m\r\e\x\u\4\h\1\k\4\9\7\l\c\5\0\z\q\d\k\k\1\v\p\d\9\x\2\n\g\4\m\h\o\q\6\i\4\j\d\i\0\z\h\u\i\t\q\h\2\a\j\b\s\8\v\i\e\t\v\v\z\x\4\u\w\1\d\p\r\a\v\5\h\u\x\y\9\b\8\s\9\o\v\c\s\u\9\b\h\q\s\r\h\k\5\k\g\3\i\i\j\3\o\y\7\8\5\6\v\f\9\k\v\s\d\3\n\c\s\g\d\8\b\t\q ]] 00:08:02.864 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.864 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:02.864 [2024-11-29 16:43:26.509318] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:02.864 [2024-11-29 16:43:26.509437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74536 ] 00:08:02.864 [2024-11-29 16:43:26.633876] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:03.123 [2024-11-29 16:43:26.662926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.123 [2024-11-29 16:43:26.683375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.123 [2024-11-29 16:43:26.710266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.123  [2024-11-29T16:43:26.915Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.123 00:08:03.123 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ogt49cwor93g6nmc3d5w3fo1pircxhf2k0pstdq65shnhhqtmwz10x4ahafsjsthfpfvo3fqnwuvcx012n8iiu2ggkn1dg7v55wzztk2s6oc4bq3xh7fw004a2grrj4e4fjf7ze4tljbu41b7b1xq4riqqexg3diuuznhufqu3m5664orccywy19luepw8vzre1x8f8f09w9x6qtjlkgqoio9i58fe9m2017fuy2yy0k8jgc348nvtzx0obpi3xmqpp7itt739jtvlliutuh6842ooklt8hf9th91s2wkgbssopef3z9v04apq7sbktokadodxqyeywvsagcskstr2zi4ys2ldjnhzn45zmejj3wvgm3itelutcymkgy6uk8l0mrexu4h1k497lc50zqdkk1vpd9x2ng4mhoq6i4jdi0zhuitqh2ajbs8vietvvzx4uw1dprav5huxy9b8s9ovcsu9bhqsrhk5kg3iij3oy7856vf9kvsd3ncsgd8btq == \o\g\t\4\9\c\w\o\r\9\3\g\6\n\m\c\3\d\5\w\3\f\o\1\p\i\r\c\x\h\f\2\k\0\p\s\t\d\q\6\5\s\h\n\h\h\q\t\m\w\z\1\0\x\4\a\h\a\f\s\j\s\t\h\f\p\f\v\o\3\f\q\n\w\u\v\c\x\0\1\2\n\8\i\i\u\2\g\g\k\n\1\d\g\7\v\5\5\w\z\z\t\k\2\s\6\o\c\4\b\q\3\x\h\7\f\w\0\0\4\a\2\g\r\r\j\4\e\4\f\j\f\7\z\e\4\t\l\j\b\u\4\1\b\7\b\1\x\q\4\r\i\q\q\e\x\g\3\d\i\u\u\z\n\h\u\f\q\u\3\m\5\6\6\4\o\r\c\c\y\w\y\1\9\l\u\e\p\w\8\v\z\r\e\1\x\8\f\8\f\0\9\w\9\x\6\q\t\j\l\k\g\q\o\i\o\9\i\5\8\f\e\9\m\2\0\1\7\f\u\y\2\y\y\0\k\8\j\g\c\3\4\8\n\v\t\z\x\0\o\b\p\i\3\x\m\q\p\p\7\i\t\t\7\3\9\j\t\v\l\l\i\u\t\u\h\6\8\4\2\o\o\k\l\t\8\h\f\9\t\h\9\1\s\2\w\k\g\b\s\s\o\p\e\f\3\z\9\v\0\4\a\p\q\7\s\b\k\t\o\k\a\d\o\d\x\q\y\e\y\w\v\s\a\g\c\s\k\s\t\r\2\z\i\4\y\s\2\l\d\j\n\h\z\n\4\5\z\m\e\j\j\3\w\v\g\m\3\i\t\e\l\u\t\c\y\m\k\g\y\6\u\k\8\l\0\m\r\e\x\u\4\h\1\k\4\9\7\l\c\5\0\z\q\d\k\k\1\v\p\d\9\x\2\n\g\4\m\h\o\q\6\i\4\j\d\i\0\z\h\u\i\t\q\h\2\a\j\b\s\8\v\i\e\t\v\v\z\x\4\u\w\1\d\p\r\a\v\5\h\u\x\y\9\b\8\s\9\o\v\c\s\u\9\b\h\q\s\r\h\k\5\k\g\3\i\i\j\3\o\y\7\8\5\6\v\f\9\k\v\s\d\3\n\c\s\g\d\8\b\t\q ]] 00:08:03.123 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.123 16:43:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:03.123 [2024-11-29 16:43:26.901497] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:03.123 [2024-11-29 16:43:26.901591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74549 ] 00:08:03.382 [2024-11-29 16:43:27.027183] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:03.382 [2024-11-29 16:43:27.052456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.382 [2024-11-29 16:43:27.071053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.382 [2024-11-29 16:43:27.097839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.382  [2024-11-29T16:43:27.433Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.641 00:08:03.642 16:43:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ogt49cwor93g6nmc3d5w3fo1pircxhf2k0pstdq65shnhhqtmwz10x4ahafsjsthfpfvo3fqnwuvcx012n8iiu2ggkn1dg7v55wzztk2s6oc4bq3xh7fw004a2grrj4e4fjf7ze4tljbu41b7b1xq4riqqexg3diuuznhufqu3m5664orccywy19luepw8vzre1x8f8f09w9x6qtjlkgqoio9i58fe9m2017fuy2yy0k8jgc348nvtzx0obpi3xmqpp7itt739jtvlliutuh6842ooklt8hf9th91s2wkgbssopef3z9v04apq7sbktokadodxqyeywvsagcskstr2zi4ys2ldjnhzn45zmejj3wvgm3itelutcymkgy6uk8l0mrexu4h1k497lc50zqdkk1vpd9x2ng4mhoq6i4jdi0zhuitqh2ajbs8vietvvzx4uw1dprav5huxy9b8s9ovcsu9bhqsrhk5kg3iij3oy7856vf9kvsd3ncsgd8btq == \o\g\t\4\9\c\w\o\r\9\3\g\6\n\m\c\3\d\5\w\3\f\o\1\p\i\r\c\x\h\f\2\k\0\p\s\t\d\q\6\5\s\h\n\h\h\q\t\m\w\z\1\0\x\4\a\h\a\f\s\j\s\t\h\f\p\f\v\o\3\f\q\n\w\u\v\c\x\0\1\2\n\8\i\i\u\2\g\g\k\n\1\d\g\7\v\5\5\w\z\z\t\k\2\s\6\o\c\4\b\q\3\x\h\7\f\w\0\0\4\a\2\g\r\r\j\4\e\4\f\j\f\7\z\e\4\t\l\j\b\u\4\1\b\7\b\1\x\q\4\r\i\q\q\e\x\g\3\d\i\u\u\z\n\h\u\f\q\u\3\m\5\6\6\4\o\r\c\c\y\w\y\1\9\l\u\e\p\w\8\v\z\r\e\1\x\8\f\8\f\0\9\w\9\x\6\q\t\j\l\k\g\q\o\i\o\9\i\5\8\f\e\9\m\2\0\1\7\f\u\y\2\y\y\0\k\8\j\g\c\3\4\8\n\v\t\z\x\0\o\b\p\i\3\x\m\q\p\p\7\i\t\t\7\3\9\j\t\v\l\l\i\u\t\u\h\6\8\4\2\o\o\k\l\t\8\h\f\9\t\h\9\1\s\2\w\k\g\b\s\s\o\p\e\f\3\z\9\v\0\4\a\p\q\7\s\b\k\t\o\k\a\d\o\d\x\q\y\e\y\w\v\s\a\g\c\s\k\s\t\r\2\z\i\4\y\s\2\l\d\j\n\h\z\n\4\5\z\m\e\j\j\3\w\v\g\m\3\i\t\e\l\u\t\c\y\m\k\g\y\6\u\k\8\l\0\m\r\e\x\u\4\h\1\k\4\9\7\l\c\5\0\z\q\d\k\k\1\v\p\d\9\x\2\n\g\4\m\h\o\q\6\i\4\j\d\i\0\z\h\u\i\t\q\h\2\a\j\b\s\8\v\i\e\t\v\v\z\x\4\u\w\1\d\p\r\a\v\5\h\u\x\y\9\b\8\s\9\o\v\c\s\u\9\b\h\q\s\r\h\k\5\k\g\3\i\i\j\3\o\y\7\8\5\6\v\f\9\k\v\s\d\3\n\c\s\g\d\8\b\t\q ]] 00:08:03.642 16:43:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.642 16:43:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:03.642 [2024-11-29 16:43:27.300854] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:03.642 [2024-11-29 16:43:27.300953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74551 ] 00:08:03.642 [2024-11-29 16:43:27.425465] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:03.901 [2024-11-29 16:43:27.451488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.901 [2024-11-29 16:43:27.469189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.901 [2024-11-29 16:43:27.495991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.901  [2024-11-29T16:43:27.693Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.901 00:08:03.901 16:43:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ogt49cwor93g6nmc3d5w3fo1pircxhf2k0pstdq65shnhhqtmwz10x4ahafsjsthfpfvo3fqnwuvcx012n8iiu2ggkn1dg7v55wzztk2s6oc4bq3xh7fw004a2grrj4e4fjf7ze4tljbu41b7b1xq4riqqexg3diuuznhufqu3m5664orccywy19luepw8vzre1x8f8f09w9x6qtjlkgqoio9i58fe9m2017fuy2yy0k8jgc348nvtzx0obpi3xmqpp7itt739jtvlliutuh6842ooklt8hf9th91s2wkgbssopef3z9v04apq7sbktokadodxqyeywvsagcskstr2zi4ys2ldjnhzn45zmejj3wvgm3itelutcymkgy6uk8l0mrexu4h1k497lc50zqdkk1vpd9x2ng4mhoq6i4jdi0zhuitqh2ajbs8vietvvzx4uw1dprav5huxy9b8s9ovcsu9bhqsrhk5kg3iij3oy7856vf9kvsd3ncsgd8btq == \o\g\t\4\9\c\w\o\r\9\3\g\6\n\m\c\3\d\5\w\3\f\o\1\p\i\r\c\x\h\f\2\k\0\p\s\t\d\q\6\5\s\h\n\h\h\q\t\m\w\z\1\0\x\4\a\h\a\f\s\j\s\t\h\f\p\f\v\o\3\f\q\n\w\u\v\c\x\0\1\2\n\8\i\i\u\2\g\g\k\n\1\d\g\7\v\5\5\w\z\z\t\k\2\s\6\o\c\4\b\q\3\x\h\7\f\w\0\0\4\a\2\g\r\r\j\4\e\4\f\j\f\7\z\e\4\t\l\j\b\u\4\1\b\7\b\1\x\q\4\r\i\q\q\e\x\g\3\d\i\u\u\z\n\h\u\f\q\u\3\m\5\6\6\4\o\r\c\c\y\w\y\1\9\l\u\e\p\w\8\v\z\r\e\1\x\8\f\8\f\0\9\w\9\x\6\q\t\j\l\k\g\q\o\i\o\9\i\5\8\f\e\9\m\2\0\1\7\f\u\y\2\y\y\0\k\8\j\g\c\3\4\8\n\v\t\z\x\0\o\b\p\i\3\x\m\q\p\p\7\i\t\t\7\3\9\j\t\v\l\l\i\u\t\u\h\6\8\4\2\o\o\k\l\t\8\h\f\9\t\h\9\1\s\2\w\k\g\b\s\s\o\p\e\f\3\z\9\v\0\4\a\p\q\7\s\b\k\t\o\k\a\d\o\d\x\q\y\e\y\w\v\s\a\g\c\s\k\s\t\r\2\z\i\4\y\s\2\l\d\j\n\h\z\n\4\5\z\m\e\j\j\3\w\v\g\m\3\i\t\e\l\u\t\c\y\m\k\g\y\6\u\k\8\l\0\m\r\e\x\u\4\h\1\k\4\9\7\l\c\5\0\z\q\d\k\k\1\v\p\d\9\x\2\n\g\4\m\h\o\q\6\i\4\j\d\i\0\z\h\u\i\t\q\h\2\a\j\b\s\8\v\i\e\t\v\v\z\x\4\u\w\1\d\p\r\a\v\5\h\u\x\y\9\b\8\s\9\o\v\c\s\u\9\b\h\q\s\r\h\k\5\k\g\3\i\i\j\3\o\y\7\8\5\6\v\f\9\k\v\s\d\3\n\c\s\g\d\8\b\t\q ]] 00:08:03.901 16:43:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:03.901 16:43:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:03.901 16:43:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:03.901 16:43:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:03.901 16:43:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.901 16:43:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:04.160 [2024-11-29 16:43:27.730819] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:04.160 [2024-11-29 16:43:27.730944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74559 ] 00:08:04.160 [2024-11-29 16:43:27.855557] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:04.160 [2024-11-29 16:43:27.888075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.160 [2024-11-29 16:43:27.913620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.160 [2024-11-29 16:43:27.946895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.419  [2024-11-29T16:43:28.211Z] Copying: 512/512 [B] (average 500 kBps) 00:08:04.419 00:08:04.419 16:43:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gtunq4218juvkfikx0lt1ekaysx0hgtbr274nf72wcg7s5711ickb5ymq51yp9f2h0nudpqv1qvlc0y2qn36onj683p09vh9h4wph9ow6y17riwol8j8q00q002o3zhv2wcgrf57edll4xflhsw5wqzgyfru1zdk7kb5ql8thnzws2dksyryl7ff97ia2ua1it0z0bt0plrkpj1pgwojl95qx9q06ib27chtbheqrfy0suf68fni4fkso61s5pgbu2amfxvluskci0toi2yzohrrfqe92kla9cn82cpo94eukchpoem2h8nf6xan83g2wm2alocx66rw89adg3gakjb4otknl34zpecgd0f2kdeqgh8g9pg8dquvybf7hlbyknpjmm25tq97b5tacrmj8597je6tfzax1gi1y9bci15jx7a2rz6ijl7ocrexc0xsj6hw1n3ntz9twt44qyt85l1ywlm93w2s6bl9y7i9ktqsa0tvryvqyo3z1iyo9dpz == \g\t\u\n\q\4\2\1\8\j\u\v\k\f\i\k\x\0\l\t\1\e\k\a\y\s\x\0\h\g\t\b\r\2\7\4\n\f\7\2\w\c\g\7\s\5\7\1\1\i\c\k\b\5\y\m\q\5\1\y\p\9\f\2\h\0\n\u\d\p\q\v\1\q\v\l\c\0\y\2\q\n\3\6\o\n\j\6\8\3\p\0\9\v\h\9\h\4\w\p\h\9\o\w\6\y\1\7\r\i\w\o\l\8\j\8\q\0\0\q\0\0\2\o\3\z\h\v\2\w\c\g\r\f\5\7\e\d\l\l\4\x\f\l\h\s\w\5\w\q\z\g\y\f\r\u\1\z\d\k\7\k\b\5\q\l\8\t\h\n\z\w\s\2\d\k\s\y\r\y\l\7\f\f\9\7\i\a\2\u\a\1\i\t\0\z\0\b\t\0\p\l\r\k\p\j\1\p\g\w\o\j\l\9\5\q\x\9\q\0\6\i\b\2\7\c\h\t\b\h\e\q\r\f\y\0\s\u\f\6\8\f\n\i\4\f\k\s\o\6\1\s\5\p\g\b\u\2\a\m\f\x\v\l\u\s\k\c\i\0\t\o\i\2\y\z\o\h\r\r\f\q\e\9\2\k\l\a\9\c\n\8\2\c\p\o\9\4\e\u\k\c\h\p\o\e\m\2\h\8\n\f\6\x\a\n\8\3\g\2\w\m\2\a\l\o\c\x\6\6\r\w\8\9\a\d\g\3\g\a\k\j\b\4\o\t\k\n\l\3\4\z\p\e\c\g\d\0\f\2\k\d\e\q\g\h\8\g\9\p\g\8\d\q\u\v\y\b\f\7\h\l\b\y\k\n\p\j\m\m\2\5\t\q\9\7\b\5\t\a\c\r\m\j\8\5\9\7\j\e\6\t\f\z\a\x\1\g\i\1\y\9\b\c\i\1\5\j\x\7\a\2\r\z\6\i\j\l\7\o\c\r\e\x\c\0\x\s\j\6\h\w\1\n\3\n\t\z\9\t\w\t\4\4\q\y\t\8\5\l\1\y\w\l\m\9\3\w\2\s\6\b\l\9\y\7\i\9\k\t\q\s\a\0\t\v\r\y\v\q\y\o\3\z\1\i\y\o\9\d\p\z ]] 00:08:04.419 16:43:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:04.419 16:43:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:04.419 [2024-11-29 16:43:28.172964] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:04.419 [2024-11-29 16:43:28.173059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74566 ] 00:08:04.677 [2024-11-29 16:43:28.297565] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:04.677 [2024-11-29 16:43:28.328596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.677 [2024-11-29 16:43:28.351430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.677 [2024-11-29 16:43:28.383642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.677  [2024-11-29T16:43:28.728Z] Copying: 512/512 [B] (average 500 kBps) 00:08:04.936 00:08:04.937 16:43:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gtunq4218juvkfikx0lt1ekaysx0hgtbr274nf72wcg7s5711ickb5ymq51yp9f2h0nudpqv1qvlc0y2qn36onj683p09vh9h4wph9ow6y17riwol8j8q00q002o3zhv2wcgrf57edll4xflhsw5wqzgyfru1zdk7kb5ql8thnzws2dksyryl7ff97ia2ua1it0z0bt0plrkpj1pgwojl95qx9q06ib27chtbheqrfy0suf68fni4fkso61s5pgbu2amfxvluskci0toi2yzohrrfqe92kla9cn82cpo94eukchpoem2h8nf6xan83g2wm2alocx66rw89adg3gakjb4otknl34zpecgd0f2kdeqgh8g9pg8dquvybf7hlbyknpjmm25tq97b5tacrmj8597je6tfzax1gi1y9bci15jx7a2rz6ijl7ocrexc0xsj6hw1n3ntz9twt44qyt85l1ywlm93w2s6bl9y7i9ktqsa0tvryvqyo3z1iyo9dpz == \g\t\u\n\q\4\2\1\8\j\u\v\k\f\i\k\x\0\l\t\1\e\k\a\y\s\x\0\h\g\t\b\r\2\7\4\n\f\7\2\w\c\g\7\s\5\7\1\1\i\c\k\b\5\y\m\q\5\1\y\p\9\f\2\h\0\n\u\d\p\q\v\1\q\v\l\c\0\y\2\q\n\3\6\o\n\j\6\8\3\p\0\9\v\h\9\h\4\w\p\h\9\o\w\6\y\1\7\r\i\w\o\l\8\j\8\q\0\0\q\0\0\2\o\3\z\h\v\2\w\c\g\r\f\5\7\e\d\l\l\4\x\f\l\h\s\w\5\w\q\z\g\y\f\r\u\1\z\d\k\7\k\b\5\q\l\8\t\h\n\z\w\s\2\d\k\s\y\r\y\l\7\f\f\9\7\i\a\2\u\a\1\i\t\0\z\0\b\t\0\p\l\r\k\p\j\1\p\g\w\o\j\l\9\5\q\x\9\q\0\6\i\b\2\7\c\h\t\b\h\e\q\r\f\y\0\s\u\f\6\8\f\n\i\4\f\k\s\o\6\1\s\5\p\g\b\u\2\a\m\f\x\v\l\u\s\k\c\i\0\t\o\i\2\y\z\o\h\r\r\f\q\e\9\2\k\l\a\9\c\n\8\2\c\p\o\9\4\e\u\k\c\h\p\o\e\m\2\h\8\n\f\6\x\a\n\8\3\g\2\w\m\2\a\l\o\c\x\6\6\r\w\8\9\a\d\g\3\g\a\k\j\b\4\o\t\k\n\l\3\4\z\p\e\c\g\d\0\f\2\k\d\e\q\g\h\8\g\9\p\g\8\d\q\u\v\y\b\f\7\h\l\b\y\k\n\p\j\m\m\2\5\t\q\9\7\b\5\t\a\c\r\m\j\8\5\9\7\j\e\6\t\f\z\a\x\1\g\i\1\y\9\b\c\i\1\5\j\x\7\a\2\r\z\6\i\j\l\7\o\c\r\e\x\c\0\x\s\j\6\h\w\1\n\3\n\t\z\9\t\w\t\4\4\q\y\t\8\5\l\1\y\w\l\m\9\3\w\2\s\6\b\l\9\y\7\i\9\k\t\q\s\a\0\t\v\r\y\v\q\y\o\3\z\1\i\y\o\9\d\p\z ]] 00:08:04.937 16:43:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:04.937 16:43:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:04.937 [2024-11-29 16:43:28.596167] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:04.937 [2024-11-29 16:43:28.596274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74568 ] 00:08:04.937 [2024-11-29 16:43:28.721324] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:05.195 [2024-11-29 16:43:28.753959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.195 [2024-11-29 16:43:28.779154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.195 [2024-11-29 16:43:28.812519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.195  [2024-11-29T16:43:28.987Z] Copying: 512/512 [B] (average 500 kBps) 00:08:05.195 00:08:05.195 16:43:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gtunq4218juvkfikx0lt1ekaysx0hgtbr274nf72wcg7s5711ickb5ymq51yp9f2h0nudpqv1qvlc0y2qn36onj683p09vh9h4wph9ow6y17riwol8j8q00q002o3zhv2wcgrf57edll4xflhsw5wqzgyfru1zdk7kb5ql8thnzws2dksyryl7ff97ia2ua1it0z0bt0plrkpj1pgwojl95qx9q06ib27chtbheqrfy0suf68fni4fkso61s5pgbu2amfxvluskci0toi2yzohrrfqe92kla9cn82cpo94eukchpoem2h8nf6xan83g2wm2alocx66rw89adg3gakjb4otknl34zpecgd0f2kdeqgh8g9pg8dquvybf7hlbyknpjmm25tq97b5tacrmj8597je6tfzax1gi1y9bci15jx7a2rz6ijl7ocrexc0xsj6hw1n3ntz9twt44qyt85l1ywlm93w2s6bl9y7i9ktqsa0tvryvqyo3z1iyo9dpz == \g\t\u\n\q\4\2\1\8\j\u\v\k\f\i\k\x\0\l\t\1\e\k\a\y\s\x\0\h\g\t\b\r\2\7\4\n\f\7\2\w\c\g\7\s\5\7\1\1\i\c\k\b\5\y\m\q\5\1\y\p\9\f\2\h\0\n\u\d\p\q\v\1\q\v\l\c\0\y\2\q\n\3\6\o\n\j\6\8\3\p\0\9\v\h\9\h\4\w\p\h\9\o\w\6\y\1\7\r\i\w\o\l\8\j\8\q\0\0\q\0\0\2\o\3\z\h\v\2\w\c\g\r\f\5\7\e\d\l\l\4\x\f\l\h\s\w\5\w\q\z\g\y\f\r\u\1\z\d\k\7\k\b\5\q\l\8\t\h\n\z\w\s\2\d\k\s\y\r\y\l\7\f\f\9\7\i\a\2\u\a\1\i\t\0\z\0\b\t\0\p\l\r\k\p\j\1\p\g\w\o\j\l\9\5\q\x\9\q\0\6\i\b\2\7\c\h\t\b\h\e\q\r\f\y\0\s\u\f\6\8\f\n\i\4\f\k\s\o\6\1\s\5\p\g\b\u\2\a\m\f\x\v\l\u\s\k\c\i\0\t\o\i\2\y\z\o\h\r\r\f\q\e\9\2\k\l\a\9\c\n\8\2\c\p\o\9\4\e\u\k\c\h\p\o\e\m\2\h\8\n\f\6\x\a\n\8\3\g\2\w\m\2\a\l\o\c\x\6\6\r\w\8\9\a\d\g\3\g\a\k\j\b\4\o\t\k\n\l\3\4\z\p\e\c\g\d\0\f\2\k\d\e\q\g\h\8\g\9\p\g\8\d\q\u\v\y\b\f\7\h\l\b\y\k\n\p\j\m\m\2\5\t\q\9\7\b\5\t\a\c\r\m\j\8\5\9\7\j\e\6\t\f\z\a\x\1\g\i\1\y\9\b\c\i\1\5\j\x\7\a\2\r\z\6\i\j\l\7\o\c\r\e\x\c\0\x\s\j\6\h\w\1\n\3\n\t\z\9\t\w\t\4\4\q\y\t\8\5\l\1\y\w\l\m\9\3\w\2\s\6\b\l\9\y\7\i\9\k\t\q\s\a\0\t\v\r\y\v\q\y\o\3\z\1\i\y\o\9\d\p\z ]] 00:08:05.195 16:43:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:05.195 16:43:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:05.454 [2024-11-29 16:43:29.029961] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:05.454 [2024-11-29 16:43:29.030068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74581 ] 00:08:05.454 [2024-11-29 16:43:29.154319] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
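For readers reconstructing this step outside the harness: the runs above exercise the same dump0 -> dump1 copy once per output flag (direct, nonblock, sync, dsync). A minimal sketch of that loop, assuming the spdk_dd binary and dump-file paths shown in the log; the gen_bytes/content-check steps here are stand-ins (head/cmp), not the harness's own code:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  head -c 512 /dev/urandom > "$DUMP0"              # stand-in for the harness's gen_bytes 512
  for oflag in direct nonblock sync dsync; do
    "$SPDK_DD" --aio --if="$DUMP0" --iflag=nonblock --of="$DUMP1" --oflag="$oflag"
    cmp -s "$DUMP0" "$DUMP1" || echo "content mismatch with oflag=$oflag"
  done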
00:08:05.454 [2024-11-29 16:43:29.184210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.454 [2024-11-29 16:43:29.207608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.454 [2024-11-29 16:43:29.239945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.713  [2024-11-29T16:43:29.505Z] Copying: 512/512 [B] (average 500 kBps) 00:08:05.713 00:08:05.713 16:43:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gtunq4218juvkfikx0lt1ekaysx0hgtbr274nf72wcg7s5711ickb5ymq51yp9f2h0nudpqv1qvlc0y2qn36onj683p09vh9h4wph9ow6y17riwol8j8q00q002o3zhv2wcgrf57edll4xflhsw5wqzgyfru1zdk7kb5ql8thnzws2dksyryl7ff97ia2ua1it0z0bt0plrkpj1pgwojl95qx9q06ib27chtbheqrfy0suf68fni4fkso61s5pgbu2amfxvluskci0toi2yzohrrfqe92kla9cn82cpo94eukchpoem2h8nf6xan83g2wm2alocx66rw89adg3gakjb4otknl34zpecgd0f2kdeqgh8g9pg8dquvybf7hlbyknpjmm25tq97b5tacrmj8597je6tfzax1gi1y9bci15jx7a2rz6ijl7ocrexc0xsj6hw1n3ntz9twt44qyt85l1ywlm93w2s6bl9y7i9ktqsa0tvryvqyo3z1iyo9dpz == \g\t\u\n\q\4\2\1\8\j\u\v\k\f\i\k\x\0\l\t\1\e\k\a\y\s\x\0\h\g\t\b\r\2\7\4\n\f\7\2\w\c\g\7\s\5\7\1\1\i\c\k\b\5\y\m\q\5\1\y\p\9\f\2\h\0\n\u\d\p\q\v\1\q\v\l\c\0\y\2\q\n\3\6\o\n\j\6\8\3\p\0\9\v\h\9\h\4\w\p\h\9\o\w\6\y\1\7\r\i\w\o\l\8\j\8\q\0\0\q\0\0\2\o\3\z\h\v\2\w\c\g\r\f\5\7\e\d\l\l\4\x\f\l\h\s\w\5\w\q\z\g\y\f\r\u\1\z\d\k\7\k\b\5\q\l\8\t\h\n\z\w\s\2\d\k\s\y\r\y\l\7\f\f\9\7\i\a\2\u\a\1\i\t\0\z\0\b\t\0\p\l\r\k\p\j\1\p\g\w\o\j\l\9\5\q\x\9\q\0\6\i\b\2\7\c\h\t\b\h\e\q\r\f\y\0\s\u\f\6\8\f\n\i\4\f\k\s\o\6\1\s\5\p\g\b\u\2\a\m\f\x\v\l\u\s\k\c\i\0\t\o\i\2\y\z\o\h\r\r\f\q\e\9\2\k\l\a\9\c\n\8\2\c\p\o\9\4\e\u\k\c\h\p\o\e\m\2\h\8\n\f\6\x\a\n\8\3\g\2\w\m\2\a\l\o\c\x\6\6\r\w\8\9\a\d\g\3\g\a\k\j\b\4\o\t\k\n\l\3\4\z\p\e\c\g\d\0\f\2\k\d\e\q\g\h\8\g\9\p\g\8\d\q\u\v\y\b\f\7\h\l\b\y\k\n\p\j\m\m\2\5\t\q\9\7\b\5\t\a\c\r\m\j\8\5\9\7\j\e\6\t\f\z\a\x\1\g\i\1\y\9\b\c\i\1\5\j\x\7\a\2\r\z\6\i\j\l\7\o\c\r\e\x\c\0\x\s\j\6\h\w\1\n\3\n\t\z\9\t\w\t\4\4\q\y\t\8\5\l\1\y\w\l\m\9\3\w\2\s\6\b\l\9\y\7\i\9\k\t\q\s\a\0\t\v\r\y\v\q\y\o\3\z\1\i\y\o\9\d\p\z ]] 00:08:05.713 00:08:05.713 real 0m3.327s 00:08:05.713 user 0m1.593s 00:08:05.713 sys 0m0.738s 00:08:05.713 16:43:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.713 ************************************ 00:08:05.713 END TEST dd_flags_misc_forced_aio 00:08:05.713 16:43:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:05.713 ************************************ 00:08:05.713 16:43:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:05.713 16:43:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:05.713 16:43:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:05.713 00:08:05.713 real 0m15.678s 00:08:05.713 user 0m6.517s 00:08:05.713 sys 0m4.478s 00:08:05.713 16:43:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.713 16:43:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:05.713 ************************************ 00:08:05.713 END TEST spdk_dd_posix 00:08:05.713 ************************************ 00:08:05.713 16:43:29 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:05.713 16:43:29 spdk_dd -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.713 16:43:29 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.713 16:43:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:05.713 ************************************ 00:08:05.713 START TEST spdk_dd_malloc 00:08:05.713 ************************************ 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:05.972 * Looking for test storage... 00:08:05.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.972 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:05.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.973 --rc genhtml_branch_coverage=1 00:08:05.973 --rc genhtml_function_coverage=1 00:08:05.973 --rc genhtml_legend=1 00:08:05.973 --rc geninfo_all_blocks=1 00:08:05.973 --rc geninfo_unexecuted_blocks=1 00:08:05.973 00:08:05.973 ' 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:05.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.973 --rc genhtml_branch_coverage=1 00:08:05.973 --rc genhtml_function_coverage=1 00:08:05.973 --rc genhtml_legend=1 00:08:05.973 --rc geninfo_all_blocks=1 00:08:05.973 --rc geninfo_unexecuted_blocks=1 00:08:05.973 00:08:05.973 ' 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:05.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.973 --rc genhtml_branch_coverage=1 00:08:05.973 --rc genhtml_function_coverage=1 00:08:05.973 --rc genhtml_legend=1 00:08:05.973 --rc geninfo_all_blocks=1 00:08:05.973 --rc geninfo_unexecuted_blocks=1 00:08:05.973 00:08:05.973 ' 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:05.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.973 --rc genhtml_branch_coverage=1 00:08:05.973 --rc genhtml_function_coverage=1 00:08:05.973 --rc genhtml_legend=1 00:08:05.973 --rc geninfo_all_blocks=1 00:08:05.973 --rc geninfo_unexecuted_blocks=1 00:08:05.973 00:08:05.973 ' 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.973 16:43:29 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:05.973 ************************************ 00:08:05.973 START TEST dd_malloc_copy 00:08:05.973 ************************************ 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:05.973 16:43:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:05.973 [2024-11-29 16:43:29.752877] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:05.973 [2024-11-29 16:43:29.752979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74658 ] 00:08:05.973 { 00:08:05.973 "subsystems": [ 00:08:05.973 { 00:08:05.973 "subsystem": "bdev", 00:08:05.973 "config": [ 00:08:05.973 { 00:08:05.973 "params": { 00:08:05.973 "block_size": 512, 00:08:05.973 "num_blocks": 1048576, 00:08:05.973 "name": "malloc0" 00:08:05.973 }, 00:08:05.973 "method": "bdev_malloc_create" 00:08:05.973 }, 00:08:05.973 { 00:08:05.973 "params": { 00:08:05.973 "block_size": 512, 00:08:05.973 "num_blocks": 1048576, 00:08:05.973 "name": "malloc1" 00:08:05.973 }, 00:08:05.973 "method": "bdev_malloc_create" 00:08:05.973 }, 00:08:05.973 { 00:08:05.973 "method": "bdev_wait_for_examine" 00:08:05.973 } 00:08:05.973 ] 00:08:05.973 } 00:08:05.973 ] 00:08:05.973 } 00:08:06.232 [2024-11-29 16:43:29.877673] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:06.232 [2024-11-29 16:43:29.907728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.232 [2024-11-29 16:43:29.931400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.232 [2024-11-29 16:43:29.965715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.610  [2024-11-29T16:43:32.339Z] Copying: 225/512 [MB] (225 MBps) [2024-11-29T16:43:32.598Z] Copying: 458/512 [MB] (232 MBps) [2024-11-29T16:43:32.857Z] Copying: 512/512 [MB] (average 230 MBps) 00:08:09.065 00:08:09.065 16:43:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:09.065 16:43:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:09.065 16:43:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:09.065 16:43:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:09.065 { 00:08:09.065 "subsystems": [ 00:08:09.065 { 00:08:09.065 "subsystem": "bdev", 00:08:09.065 "config": [ 00:08:09.065 { 00:08:09.065 "params": { 00:08:09.065 "block_size": 512, 00:08:09.065 "num_blocks": 1048576, 00:08:09.065 "name": "malloc0" 00:08:09.065 }, 00:08:09.065 "method": "bdev_malloc_create" 00:08:09.065 }, 00:08:09.065 { 00:08:09.065 "params": { 00:08:09.065 "block_size": 512, 00:08:09.065 "num_blocks": 1048576, 00:08:09.065 "name": "malloc1" 00:08:09.065 }, 00:08:09.065 "method": "bdev_malloc_create" 00:08:09.065 }, 00:08:09.065 { 00:08:09.065 "method": "bdev_wait_for_examine" 00:08:09.065 } 00:08:09.065 ] 00:08:09.065 } 00:08:09.065 ] 00:08:09.065 } 00:08:09.065 [2024-11-29 16:43:32.752397] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:09.065 [2024-11-29 16:43:32.752488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74699 ] 00:08:09.324 [2024-11-29 16:43:32.877055] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
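A standalone sketch of the malloc-to-malloc copy driven above. The JSON is the same bdev config the harness generates (two malloc bdevs of 1048576 x 512 B blocks, i.e. 512 MiB each); writing it to a temp file is an assumed equivalent of the /dev/fd/62 plumbing seen in the log, not the harness code itself:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  conf=$(mktemp)
  cat > "$conf" <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_malloc_create",
            "params": { "name": "malloc0", "block_size": 512, "num_blocks": 1048576 } },
          { "method": "bdev_malloc_create",
            "params": { "name": "malloc1", "block_size": 512, "num_blocks": 1048576 } },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  # Copy the whole malloc0 bdev into malloc1 (the log reports ~230 MBps for this).
  "$SPDK_DD" --ib=malloc0 --ob=malloc1 --json "$conf"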
00:08:09.324 [2024-11-29 16:43:32.901909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.324 [2024-11-29 16:43:32.922535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.324 [2024-11-29 16:43:32.952872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.702  [2024-11-29T16:43:35.430Z] Copying: 234/512 [MB] (234 MBps) [2024-11-29T16:43:35.430Z] Copying: 468/512 [MB] (234 MBps) [2024-11-29T16:43:35.689Z] Copying: 512/512 [MB] (average 234 MBps) 00:08:11.897 00:08:11.897 ************************************ 00:08:11.897 END TEST dd_malloc_copy 00:08:11.897 ************************************ 00:08:11.897 00:08:11.897 real 0m5.916s 00:08:11.897 user 0m5.263s 00:08:11.897 sys 0m0.497s 00:08:11.897 16:43:35 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.897 16:43:35 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:11.897 ************************************ 00:08:11.897 END TEST spdk_dd_malloc 00:08:11.897 ************************************ 00:08:11.897 00:08:11.897 real 0m6.157s 00:08:11.897 user 0m5.407s 00:08:11.897 sys 0m0.598s 00:08:11.897 16:43:35 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.897 16:43:35 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:12.158 16:43:35 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:12.158 16:43:35 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:12.158 16:43:35 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.158 16:43:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:12.158 ************************************ 00:08:12.158 START TEST spdk_dd_bdev_to_bdev 00:08:12.158 ************************************ 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:12.158 * Looking for test storage... 
00:08:12.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.158 --rc genhtml_branch_coverage=1 00:08:12.158 --rc genhtml_function_coverage=1 00:08:12.158 --rc genhtml_legend=1 00:08:12.158 --rc geninfo_all_blocks=1 00:08:12.158 --rc geninfo_unexecuted_blocks=1 00:08:12.158 00:08:12.158 ' 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.158 --rc genhtml_branch_coverage=1 00:08:12.158 --rc genhtml_function_coverage=1 00:08:12.158 --rc genhtml_legend=1 00:08:12.158 --rc geninfo_all_blocks=1 00:08:12.158 --rc geninfo_unexecuted_blocks=1 00:08:12.158 00:08:12.158 ' 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.158 --rc genhtml_branch_coverage=1 00:08:12.158 --rc genhtml_function_coverage=1 00:08:12.158 --rc genhtml_legend=1 00:08:12.158 --rc geninfo_all_blocks=1 00:08:12.158 --rc geninfo_unexecuted_blocks=1 00:08:12.158 00:08:12.158 ' 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.158 --rc genhtml_branch_coverage=1 00:08:12.158 --rc genhtml_function_coverage=1 00:08:12.158 --rc genhtml_legend=1 00:08:12.158 --rc geninfo_all_blocks=1 00:08:12.158 --rc geninfo_unexecuted_blocks=1 00:08:12.158 00:08:12.158 ' 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.158 16:43:35 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:12.158 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:12.159 ************************************ 00:08:12.159 START TEST dd_inflate_file 00:08:12.159 ************************************ 00:08:12.159 16:43:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:12.418 [2024-11-29 16:43:35.981377] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:12.418 [2024-11-29 16:43:35.981642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74807 ] 00:08:12.418 [2024-11-29 16:43:36.106943] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:12.418 [2024-11-29 16:43:36.132598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.418 [2024-11-29 16:43:36.150539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.418 [2024-11-29 16:43:36.177299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.677  [2024-11-29T16:43:36.469Z] Copying: 64/64 [MB] (average 1600 MBps) 00:08:12.677 00:08:12.677 00:08:12.677 real 0m0.421s 00:08:12.677 user 0m0.229s 00:08:12.677 sys 0m0.208s 00:08:12.677 16:43:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.677 16:43:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:12.677 ************************************ 00:08:12.677 END TEST dd_inflate_file 00:08:12.677 ************************************ 00:08:12.677 16:43:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:12.677 16:43:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:12.677 16:43:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:12.677 16:43:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:12.677 16:43:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:12.677 16:43:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:12.677 16:43:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:12.677 16:43:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.677 16:43:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:12.677 ************************************ 00:08:12.677 START TEST dd_copy_to_out_bdev 00:08:12.677 ************************************ 00:08:12.677 16:43:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:12.677 { 00:08:12.677 "subsystems": [ 00:08:12.677 { 00:08:12.677 "subsystem": "bdev", 00:08:12.677 "config": [ 00:08:12.677 { 00:08:12.677 "params": { 00:08:12.677 "trtype": "pcie", 00:08:12.677 "traddr": "0000:00:10.0", 00:08:12.677 "name": "Nvme0" 00:08:12.677 }, 00:08:12.677 "method": "bdev_nvme_attach_controller" 00:08:12.677 }, 00:08:12.677 { 00:08:12.677 "params": { 00:08:12.677 "trtype": "pcie", 00:08:12.677 "traddr": "0000:00:11.0", 00:08:12.677 "name": "Nvme1" 00:08:12.677 }, 00:08:12.677 "method": "bdev_nvme_attach_controller" 00:08:12.677 }, 00:08:12.677 { 00:08:12.677 "method": "bdev_wait_for_examine" 00:08:12.677 } 00:08:12.677 ] 00:08:12.677 } 00:08:12.677 ] 00:08:12.677 } 00:08:12.677 [2024-11-29 16:43:36.459374] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:08:12.677 [2024-11-29 16:43:36.459623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74842 ] 00:08:12.936 [2024-11-29 16:43:36.584482] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:12.937 [2024-11-29 16:43:36.610993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.937 [2024-11-29 16:43:36.631469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.937 [2024-11-29 16:43:36.659116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.395  [2024-11-29T16:43:38.187Z] Copying: 52/64 [MB] (52 MBps) [2024-11-29T16:43:38.187Z] Copying: 64/64 [MB] (average 52 MBps) 00:08:14.395 00:08:14.395 00:08:14.395 real 0m1.769s 00:08:14.395 user 0m1.568s 00:08:14.395 sys 0m1.468s 00:08:14.395 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.395 ************************************ 00:08:14.395 END TEST dd_copy_to_out_bdev 00:08:14.395 ************************************ 00:08:14.395 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:14.654 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:14.654 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:14.654 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.654 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.654 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:14.654 ************************************ 00:08:14.654 START TEST dd_offset_magic 00:08:14.654 ************************************ 00:08:14.654 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:08:14.654 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:14.654 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:14.654 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:14.654 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:14.654 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:14.654 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:14.654 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:14.654 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:14.654 [2024-11-29 16:43:38.277223] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:08:14.654 [2024-11-29 16:43:38.277481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74885 ] 00:08:14.654 { 00:08:14.654 "subsystems": [ 00:08:14.654 { 00:08:14.654 "subsystem": "bdev", 00:08:14.654 "config": [ 00:08:14.654 { 00:08:14.654 "params": { 00:08:14.654 "trtype": "pcie", 00:08:14.654 "traddr": "0000:00:10.0", 00:08:14.654 "name": "Nvme0" 00:08:14.654 }, 00:08:14.654 "method": "bdev_nvme_attach_controller" 00:08:14.654 }, 00:08:14.654 { 00:08:14.654 "params": { 00:08:14.654 "trtype": "pcie", 00:08:14.654 "traddr": "0000:00:11.0", 00:08:14.654 "name": "Nvme1" 00:08:14.654 }, 00:08:14.654 "method": "bdev_nvme_attach_controller" 00:08:14.654 }, 00:08:14.654 { 00:08:14.654 "method": "bdev_wait_for_examine" 00:08:14.654 } 00:08:14.654 ] 00:08:14.654 } 00:08:14.654 ] 00:08:14.654 } 00:08:14.654 [2024-11-29 16:43:38.397264] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:14.654 [2024-11-29 16:43:38.426493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.913 [2024-11-29 16:43:38.449442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.913 [2024-11-29 16:43:38.478459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.913  [2024-11-29T16:43:38.964Z] Copying: 65/65 [MB] (average 955 MBps) 00:08:15.172 00:08:15.173 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:15.173 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:15.173 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:15.173 16:43:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:15.173 [2024-11-29 16:43:38.894842] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:15.173 [2024-11-29 16:43:38.895101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74900 ] 00:08:15.173 { 00:08:15.173 "subsystems": [ 00:08:15.173 { 00:08:15.173 "subsystem": "bdev", 00:08:15.173 "config": [ 00:08:15.173 { 00:08:15.173 "params": { 00:08:15.173 "trtype": "pcie", 00:08:15.173 "traddr": "0000:00:10.0", 00:08:15.173 "name": "Nvme0" 00:08:15.173 }, 00:08:15.173 "method": "bdev_nvme_attach_controller" 00:08:15.173 }, 00:08:15.173 { 00:08:15.173 "params": { 00:08:15.173 "trtype": "pcie", 00:08:15.173 "traddr": "0000:00:11.0", 00:08:15.173 "name": "Nvme1" 00:08:15.173 }, 00:08:15.173 "method": "bdev_nvme_attach_controller" 00:08:15.173 }, 00:08:15.173 { 00:08:15.173 "method": "bdev_wait_for_examine" 00:08:15.173 } 00:08:15.173 ] 00:08:15.173 } 00:08:15.173 ] 00:08:15.173 } 00:08:15.432 [2024-11-29 16:43:39.013708] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:15.432 [2024-11-29 16:43:39.035469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.432 [2024-11-29 16:43:39.053580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.432 [2024-11-29 16:43:39.080630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.691  [2024-11-29T16:43:39.483Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:15.691 00:08:15.691 16:43:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:15.691 16:43:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:15.691 16:43:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:15.691 16:43:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:15.691 16:43:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:15.691 16:43:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:15.691 16:43:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:15.691 { 00:08:15.691 "subsystems": [ 00:08:15.692 { 00:08:15.692 "subsystem": "bdev", 00:08:15.692 "config": [ 00:08:15.692 { 00:08:15.692 "params": { 00:08:15.692 "trtype": "pcie", 00:08:15.692 "traddr": "0000:00:10.0", 00:08:15.692 "name": "Nvme0" 00:08:15.692 }, 00:08:15.692 "method": "bdev_nvme_attach_controller" 00:08:15.692 }, 00:08:15.692 { 00:08:15.692 "params": { 00:08:15.692 "trtype": "pcie", 00:08:15.692 "traddr": "0000:00:11.0", 00:08:15.692 "name": "Nvme1" 00:08:15.692 }, 00:08:15.692 "method": "bdev_nvme_attach_controller" 00:08:15.692 }, 00:08:15.692 { 00:08:15.692 "method": "bdev_wait_for_examine" 00:08:15.692 } 00:08:15.692 ] 00:08:15.692 } 00:08:15.692 ] 00:08:15.692 } 00:08:15.692 [2024-11-29 16:43:39.405892] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:15.692 [2024-11-29 16:43:39.405984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74916 ] 00:08:15.950 [2024-11-29 16:43:39.530556] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:15.950 [2024-11-29 16:43:39.555841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.950 [2024-11-29 16:43:39.573767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.950 [2024-11-29 16:43:39.600751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.210  [2024-11-29T16:43:40.002Z] Copying: 65/65 [MB] (average 1031 MBps) 00:08:16.210 00:08:16.210 16:43:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:16.210 16:43:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:16.210 16:43:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:16.210 16:43:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:16.469 [2024-11-29 16:43:40.009491] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:16.470 [2024-11-29 16:43:40.009565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74931 ] 00:08:16.470 { 00:08:16.470 "subsystems": [ 00:08:16.470 { 00:08:16.470 "subsystem": "bdev", 00:08:16.470 "config": [ 00:08:16.470 { 00:08:16.470 "params": { 00:08:16.470 "trtype": "pcie", 00:08:16.470 "traddr": "0000:00:10.0", 00:08:16.470 "name": "Nvme0" 00:08:16.470 }, 00:08:16.470 "method": "bdev_nvme_attach_controller" 00:08:16.470 }, 00:08:16.470 { 00:08:16.470 "params": { 00:08:16.470 "trtype": "pcie", 00:08:16.470 "traddr": "0000:00:11.0", 00:08:16.470 "name": "Nvme1" 00:08:16.470 }, 00:08:16.470 "method": "bdev_nvme_attach_controller" 00:08:16.470 }, 00:08:16.470 { 00:08:16.470 "method": "bdev_wait_for_examine" 00:08:16.470 } 00:08:16.470 ] 00:08:16.470 } 00:08:16.470 ] 00:08:16.470 } 00:08:16.470 [2024-11-29 16:43:40.129268] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
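The offset-magic pass above copies 65 blocks of 1 MiB from Nvme0n1 into Nvme1n1 at a block offset (seek=16, then seek=64), reads one block back from the same offset into dd.dump1, and checks that the 26-byte magic string written earlier ("This Is Our Magic, find it") is still at the front. A hedged sketch of the equivalent spdk_dd pair, using only flags that appear in the log; BDEV_CONF is assumed to point at a file holding the Nvme0/Nvme1 pcie attach config shown above:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  # offset 16 is in units of --bs (1 MiB blocks)
  "$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json "$BDEV_CONF"
  "$SPDK_DD" --ib=Nvme1n1 --of="$DUMP1" --count=1 --skip=16 --bs=1048576 --json "$BDEV_CONF"
  head -c 26 "$DUMP1"    # expected: This Is Our Magic, find it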
00:08:16.470 [2024-11-29 16:43:40.152638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.470 [2024-11-29 16:43:40.170697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.470 [2024-11-29 16:43:40.198077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.729  [2024-11-29T16:43:40.521Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:16.729 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:16.729 00:08:16.729 real 0m2.237s 00:08:16.729 user 0m1.607s 00:08:16.729 sys 0m0.595s 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.729 ************************************ 00:08:16.729 END TEST dd_offset_magic 00:08:16.729 ************************************ 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:16.729 16:43:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:16.988 { 00:08:16.988 "subsystems": [ 00:08:16.988 { 00:08:16.988 "subsystem": "bdev", 00:08:16.988 "config": [ 00:08:16.988 { 00:08:16.988 "params": { 00:08:16.988 "trtype": "pcie", 00:08:16.988 "traddr": "0000:00:10.0", 00:08:16.988 "name": "Nvme0" 00:08:16.988 }, 00:08:16.988 "method": "bdev_nvme_attach_controller" 00:08:16.988 }, 00:08:16.988 { 00:08:16.988 "params": { 00:08:16.988 "trtype": "pcie", 00:08:16.988 "traddr": "0000:00:11.0", 00:08:16.988 "name": "Nvme1" 00:08:16.988 }, 00:08:16.988 "method": "bdev_nvme_attach_controller" 00:08:16.988 }, 00:08:16.988 { 00:08:16.988 "method": "bdev_wait_for_examine" 00:08:16.988 } 00:08:16.988 ] 00:08:16.988 } 00:08:16.988 ] 00:08:16.988 } 00:08:16.988 [2024-11-29 16:43:40.577563] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:08:16.988 [2024-11-29 16:43:40.577740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74962 ] 00:08:16.988 [2024-11-29 16:43:40.698299] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:16.988 [2024-11-29 16:43:40.720415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.988 [2024-11-29 16:43:40.740120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.988 [2024-11-29 16:43:40.768487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.247  [2024-11-29T16:43:41.039Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:17.247 00:08:17.247 16:43:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:17.247 16:43:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:17.247 16:43:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:17.247 16:43:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:17.247 16:43:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:17.247 16:43:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:17.506 16:43:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:17.506 16:43:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:17.506 16:43:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:17.506 16:43:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:17.506 [2024-11-29 16:43:41.090977] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:17.506 [2024-11-29 16:43:41.091072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74978 ] 00:08:17.506 { 00:08:17.506 "subsystems": [ 00:08:17.506 { 00:08:17.506 "subsystem": "bdev", 00:08:17.506 "config": [ 00:08:17.506 { 00:08:17.506 "params": { 00:08:17.507 "trtype": "pcie", 00:08:17.507 "traddr": "0000:00:10.0", 00:08:17.507 "name": "Nvme0" 00:08:17.507 }, 00:08:17.507 "method": "bdev_nvme_attach_controller" 00:08:17.507 }, 00:08:17.507 { 00:08:17.507 "params": { 00:08:17.507 "trtype": "pcie", 00:08:17.507 "traddr": "0000:00:11.0", 00:08:17.507 "name": "Nvme1" 00:08:17.507 }, 00:08:17.507 "method": "bdev_nvme_attach_controller" 00:08:17.507 }, 00:08:17.507 { 00:08:17.507 "method": "bdev_wait_for_examine" 00:08:17.507 } 00:08:17.507 ] 00:08:17.507 } 00:08:17.507 ] 00:08:17.507 } 00:08:17.507 [2024-11-29 16:43:41.215964] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:17.507 [2024-11-29 16:43:41.240219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.507 [2024-11-29 16:43:41.258490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.507 [2024-11-29 16:43:41.285666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.765  [2024-11-29T16:43:41.557Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:17.765 00:08:18.025 16:43:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:18.025 ************************************ 00:08:18.025 END TEST spdk_dd_bdev_to_bdev 00:08:18.025 ************************************ 00:08:18.025 00:08:18.025 real 0m5.873s 00:08:18.025 user 0m4.332s 00:08:18.025 sys 0m2.803s 00:08:18.025 16:43:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.025 16:43:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:18.025 16:43:41 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:18.025 16:43:41 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:18.025 16:43:41 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.025 16:43:41 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.025 16:43:41 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:18.025 ************************************ 00:08:18.025 START TEST spdk_dd_uring 00:08:18.025 ************************************ 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:18.025 * Looking for test storage... 
00:08:18.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.025 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:18.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.285 --rc genhtml_branch_coverage=1 00:08:18.285 --rc genhtml_function_coverage=1 00:08:18.285 --rc genhtml_legend=1 00:08:18.285 --rc geninfo_all_blocks=1 00:08:18.285 --rc geninfo_unexecuted_blocks=1 00:08:18.285 00:08:18.285 ' 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:18.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.285 --rc genhtml_branch_coverage=1 00:08:18.285 --rc genhtml_function_coverage=1 00:08:18.285 --rc genhtml_legend=1 00:08:18.285 --rc geninfo_all_blocks=1 00:08:18.285 --rc geninfo_unexecuted_blocks=1 00:08:18.285 00:08:18.285 ' 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:18.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.285 --rc genhtml_branch_coverage=1 00:08:18.285 --rc genhtml_function_coverage=1 00:08:18.285 --rc genhtml_legend=1 00:08:18.285 --rc geninfo_all_blocks=1 00:08:18.285 --rc geninfo_unexecuted_blocks=1 00:08:18.285 00:08:18.285 ' 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:18.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.285 --rc genhtml_branch_coverage=1 00:08:18.285 --rc genhtml_function_coverage=1 00:08:18.285 --rc genhtml_legend=1 00:08:18.285 --rc geninfo_all_blocks=1 00:08:18.285 --rc geninfo_unexecuted_blocks=1 00:08:18.285 00:08:18.285 ' 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:18.285 ************************************ 00:08:18.285 START TEST dd_uring_copy 00:08:18.285 ************************************ 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:18.285 
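The init_zram/set_zram_dev helpers traced in the next lines drive the kernel zram sysfs interface directly. A minimal by-hand sketch of the same setup (device id 1 matches this run; the disksize path is an assumption, since the trace does not print the redirection target):

cat /sys/class/zram-control/hot_add      # ask the kernel for a new zram device; prints its id (1 here)
echo 512M > /sys/block/zram1/disksize    # size the device, matching set_zram_dev 1 512M below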
16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=9uvi0lj89w66352lnhd329hrcr6hoehzkvvcii4t7bhskdz4dfazgsn88y0gkyw5scq7cqj57heazbpegtwllz8hp1awcua8ogfz9db0bw3l1rqxh1oqq6nj6t5kv91ccjv8o2424ropy4or0auq5apjrduhh1cp1fpl2xxllzmfjb3qkuqsn4xsqk9l8r1z9wyof921z4ogk6sgnzg8znyxz2t7tldv98zo26d0f4lfgpnbqua1hb8j9tgcc0nuc2nu8ogt2vmwnhfh4m0clibelmxwkj5xjmyxe9d5koowbyrd3be2lyajjmcio0o46gwvee20nhxvsh3ente6aykzny6ryzudm57c52y8xy8bx08s20ym1dvnuimo9osshv9t946tvvxaffbpvf98sh0ymmo530rm0wemppivvppzytbjd0jdx1zqa0abaf9tamkfmk8hu4m8a8ru4exseevgv2ibh9p0yajxkk7zuxb980vgf5b1uvpels7s3u10hy86zbs9fqkillj8yqwthl4uptj3fzd3qtbsivm9676v3np3d5dnm4wl4ko235oru8j52anse0snc7buus3vomxpn7bs1gorwvlo53mj413f152mqvi4efxi4mq1unl2uip5aj6vwyd6y6fu7wprhb7facszsexov4epx9312bik3w4u8f57z3q8wi8fwcl0n2jbwu28oesnztpwrsnhfkccdvd17htir3rzfuq7utnopfmjdpgcwgxdaph2194l4b8bm0jysoxrfsug274biem7eivpaozie7i3nwtpz4r6c8565p6fj0xby6xxmrvvdz4v2z9vw7bhebcslouuu4mgiw03ik4izyh93i6p6vtcnnxw89m2v8e16uwo03f6p4k3m4te7h2e7zjpxbe5xp45ssbnnmzwb67oxy1fhrah5iombp0u2crgn8ko0x5s0lcedlunseyovo1jpcd3le2lsw6bfktltynho0326uhp7i7jwr2rgsezukpylcen 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
9uvi0lj89w66352lnhd329hrcr6hoehzkvvcii4t7bhskdz4dfazgsn88y0gkyw5scq7cqj57heazbpegtwllz8hp1awcua8ogfz9db0bw3l1rqxh1oqq6nj6t5kv91ccjv8o2424ropy4or0auq5apjrduhh1cp1fpl2xxllzmfjb3qkuqsn4xsqk9l8r1z9wyof921z4ogk6sgnzg8znyxz2t7tldv98zo26d0f4lfgpnbqua1hb8j9tgcc0nuc2nu8ogt2vmwnhfh4m0clibelmxwkj5xjmyxe9d5koowbyrd3be2lyajjmcio0o46gwvee20nhxvsh3ente6aykzny6ryzudm57c52y8xy8bx08s20ym1dvnuimo9osshv9t946tvvxaffbpvf98sh0ymmo530rm0wemppivvppzytbjd0jdx1zqa0abaf9tamkfmk8hu4m8a8ru4exseevgv2ibh9p0yajxkk7zuxb980vgf5b1uvpels7s3u10hy86zbs9fqkillj8yqwthl4uptj3fzd3qtbsivm9676v3np3d5dnm4wl4ko235oru8j52anse0snc7buus3vomxpn7bs1gorwvlo53mj413f152mqvi4efxi4mq1unl2uip5aj6vwyd6y6fu7wprhb7facszsexov4epx9312bik3w4u8f57z3q8wi8fwcl0n2jbwu28oesnztpwrsnhfkccdvd17htir3rzfuq7utnopfmjdpgcwgxdaph2194l4b8bm0jysoxrfsug274biem7eivpaozie7i3nwtpz4r6c8565p6fj0xby6xxmrvvdz4v2z9vw7bhebcslouuu4mgiw03ik4izyh93i6p6vtcnnxw89m2v8e16uwo03f6p4k3m4te7h2e7zjpxbe5xp45ssbnnmzwb67oxy1fhrah5iombp0u2crgn8ko0x5s0lcedlunseyovo1jpcd3le2lsw6bfktltynho0326uhp7i7jwr2rgsezukpylcen 00:08:18.285 16:43:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:18.285 [2024-11-29 16:43:41.926954] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:18.286 [2024-11-29 16:43:41.927199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75050 ] 00:08:18.286 [2024-11-29 16:43:42.052493] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:18.545 [2024-11-29 16:43:42.083592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.545 [2024-11-29 16:43:42.101517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.545 [2024-11-29 16:43:42.128076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.113  [2024-11-29T16:43:42.905Z] Copying: 511/511 [MB] (average 1213 MBps) 00:08:19.113 00:08:19.113 16:43:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:19.113 16:43:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:19.113 16:43:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:19.113 16:43:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:19.372 [2024-11-29 16:43:42.938032] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:08:19.372 [2024-11-29 16:43:42.938124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75066 ] 00:08:19.372 { 00:08:19.372 "subsystems": [ 00:08:19.372 { 00:08:19.372 "subsystem": "bdev", 00:08:19.372 "config": [ 00:08:19.372 { 00:08:19.372 "params": { 00:08:19.372 "block_size": 512, 00:08:19.372 "num_blocks": 1048576, 00:08:19.372 "name": "malloc0" 00:08:19.372 }, 00:08:19.372 "method": "bdev_malloc_create" 00:08:19.372 }, 00:08:19.372 { 00:08:19.372 "params": { 00:08:19.372 "filename": "/dev/zram1", 00:08:19.372 "name": "uring0" 00:08:19.372 }, 00:08:19.372 "method": "bdev_uring_create" 00:08:19.372 }, 00:08:19.372 { 00:08:19.372 "method": "bdev_wait_for_examine" 00:08:19.372 } 00:08:19.372 ] 00:08:19.372 } 00:08:19.372 ] 00:08:19.372 } 00:08:19.372 [2024-11-29 16:43:43.062788] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:19.372 [2024-11-29 16:43:43.086469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.372 [2024-11-29 16:43:43.104489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.372 [2024-11-29 16:43:43.134599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.748  [2024-11-29T16:43:45.475Z] Copying: 238/512 [MB] (238 MBps) [2024-11-29T16:43:45.475Z] Copying: 477/512 [MB] (239 MBps) [2024-11-29T16:43:45.733Z] Copying: 512/512 [MB] (average 239 MBps) 00:08:21.941 00:08:21.941 16:43:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:21.941 16:43:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:21.941 16:43:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:21.941 16:43:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:21.941 [2024-11-29 16:43:45.634826] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:21.941 [2024-11-29 16:43:45.634910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75110 ] 00:08:21.941 { 00:08:21.941 "subsystems": [ 00:08:21.941 { 00:08:21.941 "subsystem": "bdev", 00:08:21.941 "config": [ 00:08:21.941 { 00:08:21.941 "params": { 00:08:21.941 "block_size": 512, 00:08:21.941 "num_blocks": 1048576, 00:08:21.941 "name": "malloc0" 00:08:21.941 }, 00:08:21.941 "method": "bdev_malloc_create" 00:08:21.941 }, 00:08:21.941 { 00:08:21.941 "params": { 00:08:21.941 "filename": "/dev/zram1", 00:08:21.941 "name": "uring0" 00:08:21.941 }, 00:08:21.941 "method": "bdev_uring_create" 00:08:21.941 }, 00:08:21.941 { 00:08:21.941 "method": "bdev_wait_for_examine" 00:08:21.941 } 00:08:21.941 ] 00:08:21.941 } 00:08:21.941 ] 00:08:21.941 } 00:08:22.201 [2024-11-29 16:43:45.752992] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
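Each of these spdk_dd invocations receives its bdev layout through --json /dev/fd/62; the JSON echoed in the trace is that payload. Written out as a standalone command it would look roughly like the sketch below, with parameters copied from the config shown above (uring_copy.json is a scratch name and the dump path is abbreviated; neither comes from the test itself):

cat > uring_copy.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_malloc_create", "params": {"name": "malloc0", "num_blocks": 1048576, "block_size": 512}},
  {"method": "bdev_uring_create",  "params": {"name": "uring0",  "filename": "/dev/zram1"}},
  {"method": "bdev_wait_for_examine"}
]}]}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=magic.dump0 --ob=uring0 --json uring_copy.json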
00:08:22.201 [2024-11-29 16:43:45.775961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.201 [2024-11-29 16:43:45.798128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.201 [2024-11-29 16:43:45.828506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.579  [2024-11-29T16:43:48.309Z] Copying: 184/512 [MB] (184 MBps) [2024-11-29T16:43:48.877Z] Copying: 369/512 [MB] (185 MBps) [2024-11-29T16:43:49.137Z] Copying: 512/512 [MB] (average 180 MBps) 00:08:25.345 00:08:25.345 16:43:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:25.345 16:43:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 9uvi0lj89w66352lnhd329hrcr6hoehzkvvcii4t7bhskdz4dfazgsn88y0gkyw5scq7cqj57heazbpegtwllz8hp1awcua8ogfz9db0bw3l1rqxh1oqq6nj6t5kv91ccjv8o2424ropy4or0auq5apjrduhh1cp1fpl2xxllzmfjb3qkuqsn4xsqk9l8r1z9wyof921z4ogk6sgnzg8znyxz2t7tldv98zo26d0f4lfgpnbqua1hb8j9tgcc0nuc2nu8ogt2vmwnhfh4m0clibelmxwkj5xjmyxe9d5koowbyrd3be2lyajjmcio0o46gwvee20nhxvsh3ente6aykzny6ryzudm57c52y8xy8bx08s20ym1dvnuimo9osshv9t946tvvxaffbpvf98sh0ymmo530rm0wemppivvppzytbjd0jdx1zqa0abaf9tamkfmk8hu4m8a8ru4exseevgv2ibh9p0yajxkk7zuxb980vgf5b1uvpels7s3u10hy86zbs9fqkillj8yqwthl4uptj3fzd3qtbsivm9676v3np3d5dnm4wl4ko235oru8j52anse0snc7buus3vomxpn7bs1gorwvlo53mj413f152mqvi4efxi4mq1unl2uip5aj6vwyd6y6fu7wprhb7facszsexov4epx9312bik3w4u8f57z3q8wi8fwcl0n2jbwu28oesnztpwrsnhfkccdvd17htir3rzfuq7utnopfmjdpgcwgxdaph2194l4b8bm0jysoxrfsug274biem7eivpaozie7i3nwtpz4r6c8565p6fj0xby6xxmrvvdz4v2z9vw7bhebcslouuu4mgiw03ik4izyh93i6p6vtcnnxw89m2v8e16uwo03f6p4k3m4te7h2e7zjpxbe5xp45ssbnnmzwb67oxy1fhrah5iombp0u2crgn8ko0x5s0lcedlunseyovo1jpcd3le2lsw6bfktltynho0326uhp7i7jwr2rgsezukpylcen == \9\u\v\i\0\l\j\8\9\w\6\6\3\5\2\l\n\h\d\3\2\9\h\r\c\r\6\h\o\e\h\z\k\v\v\c\i\i\4\t\7\b\h\s\k\d\z\4\d\f\a\z\g\s\n\8\8\y\0\g\k\y\w\5\s\c\q\7\c\q\j\5\7\h\e\a\z\b\p\e\g\t\w\l\l\z\8\h\p\1\a\w\c\u\a\8\o\g\f\z\9\d\b\0\b\w\3\l\1\r\q\x\h\1\o\q\q\6\n\j\6\t\5\k\v\9\1\c\c\j\v\8\o\2\4\2\4\r\o\p\y\4\o\r\0\a\u\q\5\a\p\j\r\d\u\h\h\1\c\p\1\f\p\l\2\x\x\l\l\z\m\f\j\b\3\q\k\u\q\s\n\4\x\s\q\k\9\l\8\r\1\z\9\w\y\o\f\9\2\1\z\4\o\g\k\6\s\g\n\z\g\8\z\n\y\x\z\2\t\7\t\l\d\v\9\8\z\o\2\6\d\0\f\4\l\f\g\p\n\b\q\u\a\1\h\b\8\j\9\t\g\c\c\0\n\u\c\2\n\u\8\o\g\t\2\v\m\w\n\h\f\h\4\m\0\c\l\i\b\e\l\m\x\w\k\j\5\x\j\m\y\x\e\9\d\5\k\o\o\w\b\y\r\d\3\b\e\2\l\y\a\j\j\m\c\i\o\0\o\4\6\g\w\v\e\e\2\0\n\h\x\v\s\h\3\e\n\t\e\6\a\y\k\z\n\y\6\r\y\z\u\d\m\5\7\c\5\2\y\8\x\y\8\b\x\0\8\s\2\0\y\m\1\d\v\n\u\i\m\o\9\o\s\s\h\v\9\t\9\4\6\t\v\v\x\a\f\f\b\p\v\f\9\8\s\h\0\y\m\m\o\5\3\0\r\m\0\w\e\m\p\p\i\v\v\p\p\z\y\t\b\j\d\0\j\d\x\1\z\q\a\0\a\b\a\f\9\t\a\m\k\f\m\k\8\h\u\4\m\8\a\8\r\u\4\e\x\s\e\e\v\g\v\2\i\b\h\9\p\0\y\a\j\x\k\k\7\z\u\x\b\9\8\0\v\g\f\5\b\1\u\v\p\e\l\s\7\s\3\u\1\0\h\y\8\6\z\b\s\9\f\q\k\i\l\l\j\8\y\q\w\t\h\l\4\u\p\t\j\3\f\z\d\3\q\t\b\s\i\v\m\9\6\7\6\v\3\n\p\3\d\5\d\n\m\4\w\l\4\k\o\2\3\5\o\r\u\8\j\5\2\a\n\s\e\0\s\n\c\7\b\u\u\s\3\v\o\m\x\p\n\7\b\s\1\g\o\r\w\v\l\o\5\3\m\j\4\1\3\f\1\5\2\m\q\v\i\4\e\f\x\i\4\m\q\1\u\n\l\2\u\i\p\5\a\j\6\v\w\y\d\6\y\6\f\u\7\w\p\r\h\b\7\f\a\c\s\z\s\e\x\o\v\4\e\p\x\9\3\1\2\b\i\k\3\w\4\u\8\f\5\7\z\3\q\8\w\i\8\f\w\c\l\0\n\2\j\b\w\u\2\8\o\e\s\n\z\t\p\w\r\s\n\h\f\k\c\c\d\v\d\1\7\h\t\i\r\3\r\z\f\u\q\7\u\t\n\o\p\f\m\j\d\p\g\c\w\g\x\d\a\p\h\2\1\9\4\l\4\b\8\b\m\0\j\y\s\o\x\r\f\s\u\g\2\7\4\b\i\e\m\7\e\i\v\p\a\o\z\i\e\7\i\3\n\w\t\p\z\4\r\6\c\8\5\6\5\p\6\f\j\0\x\b\y\6\x\x\m\r\v\v\d\z\4\v\2\z\9\v\w\7\b\h\e\b\c\s\l\o\u\u\u\4\m\g\i\w\0\3\i\k\4\i\z\y\h\9\3\i\6\p\6\v\t\c\n\n\x\w\8\9\m\2\v\8\e\1\6\u\
w\o\0\3\f\6\p\4\k\3\m\4\t\e\7\h\2\e\7\z\j\p\x\b\e\5\x\p\4\5\s\s\b\n\n\m\z\w\b\6\7\o\x\y\1\f\h\r\a\h\5\i\o\m\b\p\0\u\2\c\r\g\n\8\k\o\0\x\5\s\0\l\c\e\d\l\u\n\s\e\y\o\v\o\1\j\p\c\d\3\l\e\2\l\s\w\6\b\f\k\t\l\t\y\n\h\o\0\3\2\6\u\h\p\7\i\7\j\w\r\2\r\g\s\e\z\u\k\p\y\l\c\e\n ]] 00:08:25.345 16:43:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:25.345 16:43:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 9uvi0lj89w66352lnhd329hrcr6hoehzkvvcii4t7bhskdz4dfazgsn88y0gkyw5scq7cqj57heazbpegtwllz8hp1awcua8ogfz9db0bw3l1rqxh1oqq6nj6t5kv91ccjv8o2424ropy4or0auq5apjrduhh1cp1fpl2xxllzmfjb3qkuqsn4xsqk9l8r1z9wyof921z4ogk6sgnzg8znyxz2t7tldv98zo26d0f4lfgpnbqua1hb8j9tgcc0nuc2nu8ogt2vmwnhfh4m0clibelmxwkj5xjmyxe9d5koowbyrd3be2lyajjmcio0o46gwvee20nhxvsh3ente6aykzny6ryzudm57c52y8xy8bx08s20ym1dvnuimo9osshv9t946tvvxaffbpvf98sh0ymmo530rm0wemppivvppzytbjd0jdx1zqa0abaf9tamkfmk8hu4m8a8ru4exseevgv2ibh9p0yajxkk7zuxb980vgf5b1uvpels7s3u10hy86zbs9fqkillj8yqwthl4uptj3fzd3qtbsivm9676v3np3d5dnm4wl4ko235oru8j52anse0snc7buus3vomxpn7bs1gorwvlo53mj413f152mqvi4efxi4mq1unl2uip5aj6vwyd6y6fu7wprhb7facszsexov4epx9312bik3w4u8f57z3q8wi8fwcl0n2jbwu28oesnztpwrsnhfkccdvd17htir3rzfuq7utnopfmjdpgcwgxdaph2194l4b8bm0jysoxrfsug274biem7eivpaozie7i3nwtpz4r6c8565p6fj0xby6xxmrvvdz4v2z9vw7bhebcslouuu4mgiw03ik4izyh93i6p6vtcnnxw89m2v8e16uwo03f6p4k3m4te7h2e7zjpxbe5xp45ssbnnmzwb67oxy1fhrah5iombp0u2crgn8ko0x5s0lcedlunseyovo1jpcd3le2lsw6bfktltynho0326uhp7i7jwr2rgsezukpylcen == \9\u\v\i\0\l\j\8\9\w\6\6\3\5\2\l\n\h\d\3\2\9\h\r\c\r\6\h\o\e\h\z\k\v\v\c\i\i\4\t\7\b\h\s\k\d\z\4\d\f\a\z\g\s\n\8\8\y\0\g\k\y\w\5\s\c\q\7\c\q\j\5\7\h\e\a\z\b\p\e\g\t\w\l\l\z\8\h\p\1\a\w\c\u\a\8\o\g\f\z\9\d\b\0\b\w\3\l\1\r\q\x\h\1\o\q\q\6\n\j\6\t\5\k\v\9\1\c\c\j\v\8\o\2\4\2\4\r\o\p\y\4\o\r\0\a\u\q\5\a\p\j\r\d\u\h\h\1\c\p\1\f\p\l\2\x\x\l\l\z\m\f\j\b\3\q\k\u\q\s\n\4\x\s\q\k\9\l\8\r\1\z\9\w\y\o\f\9\2\1\z\4\o\g\k\6\s\g\n\z\g\8\z\n\y\x\z\2\t\7\t\l\d\v\9\8\z\o\2\6\d\0\f\4\l\f\g\p\n\b\q\u\a\1\h\b\8\j\9\t\g\c\c\0\n\u\c\2\n\u\8\o\g\t\2\v\m\w\n\h\f\h\4\m\0\c\l\i\b\e\l\m\x\w\k\j\5\x\j\m\y\x\e\9\d\5\k\o\o\w\b\y\r\d\3\b\e\2\l\y\a\j\j\m\c\i\o\0\o\4\6\g\w\v\e\e\2\0\n\h\x\v\s\h\3\e\n\t\e\6\a\y\k\z\n\y\6\r\y\z\u\d\m\5\7\c\5\2\y\8\x\y\8\b\x\0\8\s\2\0\y\m\1\d\v\n\u\i\m\o\9\o\s\s\h\v\9\t\9\4\6\t\v\v\x\a\f\f\b\p\v\f\9\8\s\h\0\y\m\m\o\5\3\0\r\m\0\w\e\m\p\p\i\v\v\p\p\z\y\t\b\j\d\0\j\d\x\1\z\q\a\0\a\b\a\f\9\t\a\m\k\f\m\k\8\h\u\4\m\8\a\8\r\u\4\e\x\s\e\e\v\g\v\2\i\b\h\9\p\0\y\a\j\x\k\k\7\z\u\x\b\9\8\0\v\g\f\5\b\1\u\v\p\e\l\s\7\s\3\u\1\0\h\y\8\6\z\b\s\9\f\q\k\i\l\l\j\8\y\q\w\t\h\l\4\u\p\t\j\3\f\z\d\3\q\t\b\s\i\v\m\9\6\7\6\v\3\n\p\3\d\5\d\n\m\4\w\l\4\k\o\2\3\5\o\r\u\8\j\5\2\a\n\s\e\0\s\n\c\7\b\u\u\s\3\v\o\m\x\p\n\7\b\s\1\g\o\r\w\v\l\o\5\3\m\j\4\1\3\f\1\5\2\m\q\v\i\4\e\f\x\i\4\m\q\1\u\n\l\2\u\i\p\5\a\j\6\v\w\y\d\6\y\6\f\u\7\w\p\r\h\b\7\f\a\c\s\z\s\e\x\o\v\4\e\p\x\9\3\1\2\b\i\k\3\w\4\u\8\f\5\7\z\3\q\8\w\i\8\f\w\c\l\0\n\2\j\b\w\u\2\8\o\e\s\n\z\t\p\w\r\s\n\h\f\k\c\c\d\v\d\1\7\h\t\i\r\3\r\z\f\u\q\7\u\t\n\o\p\f\m\j\d\p\g\c\w\g\x\d\a\p\h\2\1\9\4\l\4\b\8\b\m\0\j\y\s\o\x\r\f\s\u\g\2\7\4\b\i\e\m\7\e\i\v\p\a\o\z\i\e\7\i\3\n\w\t\p\z\4\r\6\c\8\5\6\5\p\6\f\j\0\x\b\y\6\x\x\m\r\v\v\d\z\4\v\2\z\9\v\w\7\b\h\e\b\c\s\l\o\u\u\u\4\m\g\i\w\0\3\i\k\4\i\z\y\h\9\3\i\6\p\6\v\t\c\n\n\x\w\8\9\m\2\v\8\e\1\6\u\w\o\0\3\f\6\p\4\k\3\m\4\t\e\7\h\2\e\7\z\j\p\x\b\e\5\x\p\4\5\s\s\b\n\n\m\z\w\b\6\7\o\x\y\1\f\h\r\a\h\5\i\o\m\b\p\0\u\2\c\r\g\n\8\k\o\0\x\5\s\0\l\c\e\d\l\u\n\s\e\y\o\v\o\1\j\p\c\d\3\l\e\2\l\s\w\6\b\f\k\t\l\t\y\n\h\o\0\3\2\6\u\h\p\7\i\7\j\w\r\2\r\g\s\e\z\u\k\p\y\l\c\e\n ]] 00:08:25.345 
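For scale, the earlier --of=magic.dump0 --oflag=append --bs=536869887 --count=1 step pads the magic file (1024 bytes from gen_bytes 1024 plus echo's trailing newline, 1025 bytes in all) up to exactly 536870912 bytes, i.e. the full 512 MiB of the zram device, so the copies above push the whole device through uring0. Stripped of the xtrace escaping, the verification reduces to the sketch below (the redirection sources are an assumption; xtrace does not show them):

read -rn1024 verify_magic < magic.dump1    # read the leading 1024 bytes back out of the copy
[[ $verify_magic == "$magic" ]]            # must equal the magic generated at the start
diff -q magic.dump0 magic.dump1            # and the dumps must be byte-identical (next line below)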
16:43:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:25.621 16:43:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:25.621 16:43:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:25.621 16:43:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:25.621 16:43:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:25.621 [2024-11-29 16:43:49.365124] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:25.621 [2024-11-29 16:43:49.365414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75166 ] 00:08:25.621 { 00:08:25.621 "subsystems": [ 00:08:25.621 { 00:08:25.621 "subsystem": "bdev", 00:08:25.621 "config": [ 00:08:25.621 { 00:08:25.621 "params": { 00:08:25.621 "block_size": 512, 00:08:25.621 "num_blocks": 1048576, 00:08:25.621 "name": "malloc0" 00:08:25.621 }, 00:08:25.621 "method": "bdev_malloc_create" 00:08:25.621 }, 00:08:25.621 { 00:08:25.621 "params": { 00:08:25.621 "filename": "/dev/zram1", 00:08:25.621 "name": "uring0" 00:08:25.621 }, 00:08:25.621 "method": "bdev_uring_create" 00:08:25.621 }, 00:08:25.621 { 00:08:25.621 "method": "bdev_wait_for_examine" 00:08:25.621 } 00:08:25.621 ] 00:08:25.621 } 00:08:25.621 ] 00:08:25.621 } 00:08:25.881 [2024-11-29 16:43:49.484121] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:25.881 [2024-11-29 16:43:49.508542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.881 [2024-11-29 16:43:49.526510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.881 [2024-11-29 16:43:49.553510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.258  [2024-11-29T16:43:51.987Z] Copying: 165/512 [MB] (165 MBps) [2024-11-29T16:43:52.924Z] Copying: 331/512 [MB] (165 MBps) [2024-11-29T16:43:52.924Z] Copying: 495/512 [MB] (164 MBps) [2024-11-29T16:43:53.184Z] Copying: 512/512 [MB] (average 165 MBps) 00:08:29.392 00:08:29.392 16:43:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:29.392 16:43:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:29.392 16:43:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:29.392 16:43:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:29.392 16:43:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:29.392 16:43:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:29.392 16:43:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:29.392 16:43:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:29.392 { 00:08:29.392 "subsystems": [ 00:08:29.392 { 00:08:29.392 "subsystem": "bdev", 00:08:29.392 "config": [ 00:08:29.392 { 00:08:29.392 "params": { 00:08:29.392 "block_size": 512, 00:08:29.392 "num_blocks": 1048576, 00:08:29.392 "name": "malloc0" 00:08:29.392 }, 00:08:29.392 "method": "bdev_malloc_create" 00:08:29.392 }, 00:08:29.392 { 00:08:29.392 "params": { 00:08:29.392 "filename": "/dev/zram1", 00:08:29.392 "name": "uring0" 00:08:29.392 }, 00:08:29.392 "method": "bdev_uring_create" 00:08:29.392 }, 00:08:29.392 { 00:08:29.392 "params": { 00:08:29.392 "name": "uring0" 00:08:29.392 }, 00:08:29.392 "method": "bdev_uring_delete" 00:08:29.392 }, 00:08:29.392 { 00:08:29.392 "method": "bdev_wait_for_examine" 00:08:29.392 } 00:08:29.392 ] 00:08:29.392 } 00:08:29.392 ] 00:08:29.392 } 00:08:29.392 [2024-11-29 16:43:53.025681] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:29.392 [2024-11-29 16:43:53.025794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75216 ] 00:08:29.392 [2024-11-29 16:43:53.150927] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:29.392 [2024-11-29 16:43:53.177543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.651 [2024-11-29 16:43:53.197981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.651 [2024-11-29 16:43:53.224960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.651  [2024-11-29T16:43:53.702Z] Copying: 0/0 [B] (average 0 Bps) 00:08:29.910 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.910 16:43:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:29.910 { 00:08:29.910 "subsystems": [ 00:08:29.910 { 00:08:29.910 "subsystem": "bdev", 00:08:29.910 "config": [ 00:08:29.910 { 00:08:29.910 "params": { 00:08:29.910 "block_size": 512, 00:08:29.910 "num_blocks": 1048576, 00:08:29.910 "name": "malloc0" 00:08:29.910 }, 00:08:29.910 "method": "bdev_malloc_create" 00:08:29.910 }, 00:08:29.910 { 00:08:29.910 "params": { 00:08:29.910 "filename": "/dev/zram1", 00:08:29.910 "name": "uring0" 00:08:29.910 }, 00:08:29.910 "method": "bdev_uring_create" 00:08:29.910 }, 00:08:29.910 { 00:08:29.910 "params": { 00:08:29.910 "name": "uring0" 00:08:29.910 }, 00:08:29.910 "method": "bdev_uring_delete" 00:08:29.910 }, 00:08:29.910 { 00:08:29.910 "method": "bdev_wait_for_examine" 00:08:29.910 } 00:08:29.910 ] 00:08:29.910 } 00:08:29.910 ] 00:08:29.910 } 00:08:29.910 [2024-11-29 16:43:53.601631] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:08:29.910 [2024-11-29 16:43:53.601718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75240 ] 00:08:30.170 [2024-11-29 16:43:53.725956] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:30.170 [2024-11-29 16:43:53.750717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.170 [2024-11-29 16:43:53.768491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.170 [2024-11-29 16:43:53.795600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.170 [2024-11-29 16:43:53.918799] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:30.170 [2024-11-29 16:43:53.918860] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:30.170 [2024-11-29 16:43:53.918870] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:30.170 [2024-11-29 16:43:53.918879] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:30.430 [2024-11-29 16:43:54.092942] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:30.430 00:08:30.430 real 0m12.331s 00:08:30.430 user 0m8.442s 00:08:30.430 sys 0m10.755s 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.430 ************************************ 00:08:30.430 END TEST dd_uring_copy 00:08:30.430 ************************************ 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:30.430 00:08:30.430 real 0m12.576s 00:08:30.430 user 0m8.597s 00:08:30.430 sys 0m10.842s 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.430 16:43:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:30.430 ************************************ 00:08:30.430 END TEST spdk_dd_uring 00:08:30.430 
************************************ 00:08:30.690 16:43:54 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:30.690 16:43:54 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.690 16:43:54 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.690 16:43:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:30.690 ************************************ 00:08:30.690 START TEST spdk_dd_sparse 00:08:30.690 ************************************ 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:30.690 * Looking for test storage... 00:08:30.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:30.690 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:30.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.691 --rc genhtml_branch_coverage=1 00:08:30.691 --rc genhtml_function_coverage=1 00:08:30.691 --rc genhtml_legend=1 00:08:30.691 --rc geninfo_all_blocks=1 00:08:30.691 --rc geninfo_unexecuted_blocks=1 00:08:30.691 00:08:30.691 ' 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:30.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.691 --rc genhtml_branch_coverage=1 00:08:30.691 --rc genhtml_function_coverage=1 00:08:30.691 --rc genhtml_legend=1 00:08:30.691 --rc geninfo_all_blocks=1 00:08:30.691 --rc geninfo_unexecuted_blocks=1 00:08:30.691 00:08:30.691 ' 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:30.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.691 --rc genhtml_branch_coverage=1 00:08:30.691 --rc genhtml_function_coverage=1 00:08:30.691 --rc genhtml_legend=1 00:08:30.691 --rc geninfo_all_blocks=1 00:08:30.691 --rc geninfo_unexecuted_blocks=1 00:08:30.691 00:08:30.691 ' 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:30.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.691 --rc genhtml_branch_coverage=1 00:08:30.691 --rc genhtml_function_coverage=1 00:08:30.691 --rc genhtml_legend=1 00:08:30.691 --rc geninfo_all_blocks=1 00:08:30.691 --rc geninfo_unexecuted_blocks=1 00:08:30.691 00:08:30.691 ' 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.691 16:43:54 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:30.691 16:43:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:30.951 1+0 records in 00:08:30.951 1+0 records out 00:08:30.951 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00685937 s, 611 MB/s 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:30.951 1+0 records in 00:08:30.951 1+0 records out 00:08:30.951 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00400373 s, 1.0 GB/s 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:30.951 1+0 records in 00:08:30.951 1+0 records out 00:08:30.951 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00681727 s, 615 MB/s 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:30.951 ************************************ 00:08:30.951 START TEST dd_sparse_file_to_file 00:08:30.951 ************************************ 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:30.951 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:30.951 [2024-11-29 16:43:54.571720] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:08:30.951 [2024-11-29 16:43:54.571826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75334 ] 00:08:30.951 { 00:08:30.951 "subsystems": [ 00:08:30.951 { 00:08:30.951 "subsystem": "bdev", 00:08:30.951 "config": [ 00:08:30.951 { 00:08:30.951 "params": { 00:08:30.951 "block_size": 4096, 00:08:30.951 "filename": "dd_sparse_aio_disk", 00:08:30.951 "name": "dd_aio" 00:08:30.951 }, 00:08:30.951 "method": "bdev_aio_create" 00:08:30.951 }, 00:08:30.951 { 00:08:30.951 "params": { 00:08:30.951 "lvs_name": "dd_lvstore", 00:08:30.951 "bdev_name": "dd_aio" 00:08:30.951 }, 00:08:30.951 "method": "bdev_lvol_create_lvstore" 00:08:30.951 }, 00:08:30.951 { 00:08:30.951 "method": "bdev_wait_for_examine" 00:08:30.951 } 00:08:30.951 ] 00:08:30.951 } 00:08:30.951 ] 00:08:30.951 } 00:08:30.951 [2024-11-29 16:43:54.696821] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:30.951 [2024-11-29 16:43:54.725665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.211 [2024-11-29 16:43:54.744561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.211 [2024-11-29 16:43:54.772651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.211  [2024-11-29T16:43:55.003Z] Copying: 12/36 [MB] (average 1000 MBps) 00:08:31.211 00:08:31.211 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:31.211 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:31.211 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:31.211 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:31.211 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:31.211 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:31.211 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:31.211 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:31.211 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:31.211 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:31.211 00:08:31.211 real 0m0.484s 00:08:31.211 user 0m0.288s 00:08:31.211 sys 0m0.238s 00:08:31.211 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.211 16:43:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:31.211 ************************************ 00:08:31.211 END TEST dd_sparse_file_to_file 00:08:31.211 ************************************ 00:08:31.471 16:43:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:31.471 16:43:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.471 16:43:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 
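The prepare step earlier (the three dd ... seek= writes) built file_zero1 as a 36 MiB file with only three 4 MiB extents actually allocated, and the stat checks above compare logical size against allocated blocks to confirm the holes survived the copy. As a sketch outside the harness, using the same file name and the numbers reported in this run:

dd if=/dev/zero of=file_zero1 bs=4M count=1            # data at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4     # data at 16 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8     # data at 32 MiB
stat --printf=%s file_zero1    # logical size: 37748736 bytes (36 MiB)
stat --printf=%b file_zero1    # allocated 512-byte blocks: 24576 (12 MiB of real data)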
00:08:31.471 16:43:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:31.471 ************************************ 00:08:31.471 START TEST dd_sparse_file_to_bdev 00:08:31.471 ************************************ 00:08:31.471 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:08:31.471 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:31.471 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:31.471 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:31.471 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:31.471 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:31.472 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:31.472 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:31.472 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:31.472 [2024-11-29 16:43:55.099360] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:31.472 [2024-11-29 16:43:55.099483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75375 ] 00:08:31.472 { 00:08:31.472 "subsystems": [ 00:08:31.472 { 00:08:31.472 "subsystem": "bdev", 00:08:31.472 "config": [ 00:08:31.472 { 00:08:31.472 "params": { 00:08:31.472 "block_size": 4096, 00:08:31.472 "filename": "dd_sparse_aio_disk", 00:08:31.472 "name": "dd_aio" 00:08:31.472 }, 00:08:31.472 "method": "bdev_aio_create" 00:08:31.472 }, 00:08:31.472 { 00:08:31.472 "params": { 00:08:31.472 "lvs_name": "dd_lvstore", 00:08:31.472 "lvol_name": "dd_lvol", 00:08:31.472 "size_in_mib": 36, 00:08:31.472 "thin_provision": true 00:08:31.472 }, 00:08:31.472 "method": "bdev_lvol_create" 00:08:31.472 }, 00:08:31.472 { 00:08:31.472 "method": "bdev_wait_for_examine" 00:08:31.472 } 00:08:31.472 ] 00:08:31.472 } 00:08:31.472 ] 00:08:31.472 } 00:08:31.472 [2024-11-29 16:43:55.226013] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:31.472 [2024-11-29 16:43:55.253014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.730 [2024-11-29 16:43:55.275107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.730 [2024-11-29 16:43:55.310942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.730  [2024-11-29T16:43:55.522Z] Copying: 12/36 [MB] (average 521 MBps) 00:08:31.730 00:08:31.730 00:08:31.730 real 0m0.462s 00:08:31.730 user 0m0.307s 00:08:31.730 sys 0m0.228s 00:08:31.730 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.730 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:31.730 ************************************ 00:08:31.730 END TEST dd_sparse_file_to_bdev 00:08:31.730 ************************************ 00:08:31.988 16:43:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:31.988 16:43:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.988 16:43:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.988 16:43:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:31.988 ************************************ 00:08:31.988 START TEST dd_sparse_bdev_to_file 00:08:31.988 ************************************ 00:08:31.988 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:08:31.988 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:31.988 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:31.988 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:31.988 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:31.988 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:31.988 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:31.988 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:31.988 16:43:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:31.988 { 00:08:31.988 "subsystems": [ 00:08:31.988 { 00:08:31.988 "subsystem": "bdev", 00:08:31.988 "config": [ 00:08:31.988 { 00:08:31.988 "params": { 00:08:31.988 "block_size": 4096, 00:08:31.988 "filename": "dd_sparse_aio_disk", 00:08:31.988 "name": "dd_aio" 00:08:31.988 }, 00:08:31.988 "method": "bdev_aio_create" 00:08:31.988 }, 00:08:31.988 { 00:08:31.988 "method": "bdev_wait_for_examine" 00:08:31.988 } 00:08:31.988 ] 00:08:31.988 } 00:08:31.988 ] 00:08:31.988 } 00:08:31.988 [2024-11-29 16:43:55.616933] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
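The bdev_to_file copy that starts here is verified further below with the same stat-based check used throughout sparse.sh; a minimal sketch of that check, using the file names and format specifiers from the trace:

# Sketch of the verification traced below: the logical size (%s) of source and
# destination must match, and the allocated block count (%b) must match too,
# confirming the holes survived the copy.
stat2_s=$(stat --printf=%s file_zero2)
stat3_s=$(stat --printf=%s file_zero3)
[[ "$stat2_s" == "$stat3_s" ]]
stat2_b=$(stat --printf=%b file_zero2)
stat3_b=$(stat --printf=%b file_zero3)
[[ "$stat2_b" == "$stat3_b" ]]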
00:08:31.988 [2024-11-29 16:43:55.617041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75412 ] 00:08:31.988 [2024-11-29 16:43:55.742005] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:31.988 [2024-11-29 16:43:55.768160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.248 [2024-11-29 16:43:55.787120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.248 [2024-11-29 16:43:55.814477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.248  [2024-11-29T16:43:56.040Z] Copying: 12/36 [MB] (average 1090 MBps) 00:08:32.248 00:08:32.248 16:43:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:32.248 16:43:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:32.248 16:43:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:32.248 16:43:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:32.248 16:43:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:32.248 16:43:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:32.248 16:43:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:32.248 16:43:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:32.248 16:43:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:32.248 16:43:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:32.248 00:08:32.248 real 0m0.473s 00:08:32.248 user 0m0.282s 00:08:32.248 sys 0m0.244s 00:08:32.248 16:43:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.248 16:43:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:32.248 ************************************ 00:08:32.248 END TEST dd_sparse_bdev_to_file 00:08:32.248 ************************************ 00:08:32.508 16:43:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:32.508 16:43:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:32.508 16:43:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:32.508 16:43:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:32.508 16:43:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:32.508 00:08:32.508 real 0m1.838s 00:08:32.508 user 0m1.058s 00:08:32.508 sys 0m0.924s 00:08:32.508 16:43:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.508 ************************************ 00:08:32.508 END TEST spdk_dd_sparse 00:08:32.508 16:43:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:32.508 ************************************ 00:08:32.508 16:43:56 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:32.508 16:43:56 
spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.508 16:43:56 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.508 16:43:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:32.508 ************************************ 00:08:32.508 START TEST spdk_dd_negative 00:08:32.508 ************************************ 00:08:32.508 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:32.508 * Looking for test storage... 00:08:32.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:32.508 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:32.508 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:08:32.508 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:32.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.770 --rc genhtml_branch_coverage=1 00:08:32.770 --rc genhtml_function_coverage=1 00:08:32.770 --rc genhtml_legend=1 00:08:32.770 --rc geninfo_all_blocks=1 00:08:32.770 --rc geninfo_unexecuted_blocks=1 00:08:32.770 00:08:32.770 ' 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:32.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.770 --rc genhtml_branch_coverage=1 00:08:32.770 --rc genhtml_function_coverage=1 00:08:32.770 --rc genhtml_legend=1 00:08:32.770 --rc geninfo_all_blocks=1 00:08:32.770 --rc geninfo_unexecuted_blocks=1 00:08:32.770 00:08:32.770 ' 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:32.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.770 --rc genhtml_branch_coverage=1 00:08:32.770 --rc genhtml_function_coverage=1 00:08:32.770 --rc genhtml_legend=1 00:08:32.770 --rc geninfo_all_blocks=1 00:08:32.770 --rc geninfo_unexecuted_blocks=1 00:08:32.770 00:08:32.770 ' 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:32.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.770 --rc genhtml_branch_coverage=1 00:08:32.770 --rc genhtml_function_coverage=1 00:08:32.770 --rc genhtml_legend=1 00:08:32.770 --rc geninfo_all_blocks=1 00:08:32.770 --rc geninfo_unexecuted_blocks=1 00:08:32.770 00:08:32.770 ' 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.770 16:43:56 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.771 ************************************ 00:08:32.771 START TEST 
dd_invalid_arguments 00:08:32.771 ************************************ 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.771 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:32.771 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:32.771 00:08:32.771 CPU options: 00:08:32.771 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:32.771 (like [0,1,10]) 00:08:32.771 --lcores lcore to CPU mapping list. The list is in the format: 00:08:32.771 [<,lcores[@CPUs]>...] 00:08:32.771 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:32.771 Within the group, '-' is used for range separator, 00:08:32.771 ',' is used for single number separator. 00:08:32.771 '( )' can be omitted for single element group, 00:08:32.771 '@' can be omitted if cpus and lcores have the same value 00:08:32.771 --disable-cpumask-locks Disable CPU core lock files. 00:08:32.771 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:32.771 pollers in the app support interrupt mode) 00:08:32.771 -p, --main-core main (primary) core for DPDK 00:08:32.771 00:08:32.771 Configuration options: 00:08:32.771 -c, --config, --json JSON config file 00:08:32.771 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:32.771 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:32.771 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:32.771 --rpcs-allowed comma-separated list of permitted RPCS 00:08:32.771 --json-ignore-init-errors don't exit on invalid config entry 00:08:32.771 00:08:32.771 Memory options: 00:08:32.771 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:32.771 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:32.771 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:32.771 -R, --huge-unlink unlink huge files after initialization 00:08:32.771 -n, --mem-channels number of memory channels used for DPDK 00:08:32.771 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:32.771 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:32.771 --no-huge run without using hugepages 00:08:32.771 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:32.771 -i, --shm-id shared memory ID (optional) 00:08:32.771 -g, --single-file-segments force creating just one hugetlbfs file 00:08:32.771 00:08:32.771 PCI options: 00:08:32.771 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:32.771 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:32.771 -u, --no-pci disable PCI access 00:08:32.771 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:32.771 00:08:32.771 Log options: 00:08:32.771 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:32.771 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:32.771 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:32.771 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:32.771 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:08:32.771 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:08:32.771 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:08:32.771 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:08:32.771 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:08:32.771 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:08:32.771 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:32.771 --silence-noticelog disable notice level logging to stderr 00:08:32.771 00:08:32.771 Trace options: 00:08:32.771 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:32.771 setting 0 to disable trace (default 32768) 00:08:32.771 Tracepoints vary in size and can use more than one trace entry. 00:08:32.771 -e, --tpoint-group [:] 00:08:32.771 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:32.771 [2024-11-29 16:43:56.407095] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:32.771 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:32.771 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:08:32.771 bdev_raid, scheduler, all). 00:08:32.771 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:32.771 a tracepoint group. First tpoint inside a group can be enabled by 00:08:32.771 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:32.772 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:32.772 in /include/spdk_internal/trace_defs.h 00:08:32.772 00:08:32.772 Other options: 00:08:32.772 -h, --help show this usage 00:08:32.772 -v, --version print SPDK version 00:08:32.772 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:32.772 --env-context Opaque context for use of the env implementation 00:08:32.772 00:08:32.772 Application specific: 00:08:32.772 [--------- DD Options ---------] 00:08:32.772 --if Input file. Must specify either --if or --ib. 00:08:32.772 --ib Input bdev. Must specifier either --if or --ib 00:08:32.772 --of Output file. Must specify either --of or --ob. 00:08:32.772 --ob Output bdev. Must specify either --of or --ob. 00:08:32.772 --iflag Input file flags. 00:08:32.772 --oflag Output file flags. 00:08:32.772 --bs I/O unit size (default: 4096) 00:08:32.772 --qd Queue depth (default: 2) 00:08:32.772 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:32.772 --skip Skip this many I/O units at start of input. (default: 0) 00:08:32.772 --seek Skip this many I/O units at start of output. (default: 0) 00:08:32.772 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:32.772 --sparse Enable hole skipping in input target 00:08:32.772 Available iflag and oflag values: 00:08:32.772 append - append mode 00:08:32.772 direct - use direct I/O for data 00:08:32.772 directory - fail unless a directory 00:08:32.772 dsync - use synchronized I/O for data 00:08:32.772 noatime - do not update access time 00:08:32.772 noctty - do not assign controlling terminal from file 00:08:32.772 nofollow - do not follow symlinks 00:08:32.772 nonblock - use non-blocking I/O 00:08:32.772 sync - use synchronized I/O for data and metadata 00:08:32.772 ************************************ 00:08:32.772 END TEST dd_invalid_arguments 00:08:32.772 ************************************ 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.772 00:08:32.772 real 0m0.078s 00:08:32.772 user 0m0.049s 00:08:32.772 sys 0m0.026s 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.772 ************************************ 00:08:32.772 START TEST dd_double_input 00:08:32.772 ************************************ 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.772 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:32.772 [2024-11-29 16:43:56.541905] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
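The double-input case above follows the same pattern as the rest of negative_dd.sh: the NOT wrapper runs spdk_dd with a contradictory argument set, and the test passes only if the command exits non-zero with the error shown. A minimal sketch of that pattern, mirroring the traced invocation (paths shortened):

# Sketch only: both --if (file input) and --ib (bdev input) are supplied, so
# spdk_dd must refuse to run; a zero exit status here is treated as a failure.
if ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ib= --ob=; then
  echo "expected spdk_dd to reject --if together with --ib" >&2
  exit 1
fi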
00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.032 00:08:33.032 real 0m0.080s 00:08:33.032 user 0m0.045s 00:08:33.032 sys 0m0.033s 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.032 ************************************ 00:08:33.032 END TEST dd_double_input 00:08:33.032 ************************************ 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.032 ************************************ 00:08:33.032 START TEST dd_double_output 00:08:33.032 ************************************ 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.032 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:33.033 [2024-11-29 16:43:56.673352] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.033 ************************************ 00:08:33.033 END TEST dd_double_output 00:08:33.033 ************************************ 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.033 00:08:33.033 real 0m0.076s 00:08:33.033 user 0m0.051s 00:08:33.033 sys 0m0.025s 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.033 ************************************ 00:08:33.033 START TEST dd_no_input 00:08:33.033 ************************************ 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:33.033 [2024-11-29 16:43:56.802960] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.033 00:08:33.033 real 0m0.077s 00:08:33.033 user 0m0.053s 00:08:33.033 sys 0m0.023s 00:08:33.033 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.293 ************************************ 00:08:33.293 END TEST dd_no_input 00:08:33.293 ************************************ 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.293 ************************************ 00:08:33.293 START TEST dd_no_output 00:08:33.293 ************************************ 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:33.293 [2024-11-29 16:43:56.920564] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:33.293 16:43:56 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.293 00:08:33.293 real 0m0.062s 00:08:33.293 user 0m0.038s 00:08:33.293 sys 0m0.024s 00:08:33.293 ************************************ 00:08:33.293 END TEST dd_no_output 00:08:33.293 ************************************ 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:33.293 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.294 ************************************ 00:08:33.294 START TEST dd_wrong_blocksize 00:08:33.294 ************************************ 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.294 16:43:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:33.294 [2024-11-29 16:43:57.036252] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:33.294 16:43:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:08:33.294 16:43:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.294 16:43:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.294 16:43:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.294 00:08:33.294 real 0m0.061s 00:08:33.294 user 0m0.041s 00:08:33.294 sys 0m0.019s 00:08:33.294 16:43:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.294 ************************************ 00:08:33.294 END TEST dd_wrong_blocksize 00:08:33.294 ************************************ 00:08:33.294 16:43:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.554 ************************************ 00:08:33.554 START TEST dd_smaller_blocksize 00:08:33.554 ************************************ 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.554 
16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.554 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:33.554 [2024-11-29 16:43:57.149867] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:33.554 [2024-11-29 16:43:57.149951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75633 ] 00:08:33.554 [2024-11-29 16:43:57.271634] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:33.554 [2024-11-29 16:43:57.304994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.554 [2024-11-29 16:43:57.330509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.814 [2024-11-29 16:43:57.365622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.814 [2024-11-29 16:43:57.386542] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:33.814 [2024-11-29 16:43:57.386647] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.814 [2024-11-29 16:43:57.457550] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:08:33.814 ************************************ 00:08:33.814 END TEST dd_smaller_blocksize 00:08:33.814 ************************************ 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.814 00:08:33.814 real 0m0.408s 00:08:33.814 user 0m0.201s 00:08:33.814 sys 0m0.104s 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.814 ************************************ 00:08:33.814 START TEST dd_invalid_count 00:08:33.814 ************************************ 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # 
invalid_count 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.814 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:34.074 [2024-11-29 16:43:57.624407] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.074 00:08:34.074 real 0m0.079s 00:08:34.074 user 0m0.054s 00:08:34.074 sys 0m0.024s 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.074 ************************************ 00:08:34.074 END TEST dd_invalid_count 00:08:34.074 ************************************ 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.074 
************************************ 00:08:34.074 START TEST dd_invalid_oflag 00:08:34.074 ************************************ 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.074 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:34.075 [2024-11-29 16:43:57.756128] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.075 00:08:34.075 real 0m0.075s 00:08:34.075 user 0m0.051s 00:08:34.075 sys 0m0.024s 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.075 ************************************ 00:08:34.075 END TEST dd_invalid_oflag 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:34.075 ************************************ 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.075 ************************************ 00:08:34.075 
START TEST dd_invalid_iflag 00:08:34.075 ************************************ 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.075 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:34.336 [2024-11-29 16:43:57.888905] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.336 00:08:34.336 real 0m0.079s 00:08:34.336 user 0m0.049s 00:08:34.336 sys 0m0.029s 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:34.336 ************************************ 00:08:34.336 END TEST dd_invalid_iflag 00:08:34.336 ************************************ 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.336 ************************************ 00:08:34.336 START TEST dd_unknown_flag 00:08:34.336 
************************************ 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.336 16:43:57 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:34.336 [2024-11-29 16:43:58.015706] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:34.336 [2024-11-29 16:43:58.016371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75725 ] 00:08:34.596 [2024-11-29 16:43:58.141519] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
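This dd_unknown_flag case hands spdk_dd a file flag it cannot parse (--oflag=-1) and expects the run to fail with the "Unknown file flag: -1" error shown further down. A minimal stand-alone sketch of the same negative check, assuming only the spdk_dd binary and dump-file paths already visible in this log, would be:

# Sketch of the dd_unknown_flag negative case; paths are the ones used by the test above.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
IF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
OF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
# "-1" is not a recognized file flag, so spdk_dd should print
# "Unknown file flag: -1" and exit non-zero.
if "$SPDK_DD" --if="$IF" --of="$OF" --oflag=-1; then
    echo "unexpected success: unknown flag was accepted" >&2
    exit 1
fi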
00:08:34.596 [2024-11-29 16:43:58.175069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.597 [2024-11-29 16:43:58.201197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.597 [2024-11-29 16:43:58.235752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.597 [2024-11-29 16:43:58.255319] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:34.597 [2024-11-29 16:43:58.255576] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.597 [2024-11-29 16:43:58.255694] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:34.597 [2024-11-29 16:43:58.255753] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.597 [2024-11-29 16:43:58.256100] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:34.597 [2024-11-29 16:43:58.256261] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.597 [2024-11-29 16:43:58.256405] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:34.597 [2024-11-29 16:43:58.256469] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:34.597 [2024-11-29 16:43:58.325031] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:34.597 16:43:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:08:34.597 16:43:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.597 16:43:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:08:34.597 16:43:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:08:34.597 16:43:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:08:34.597 ************************************ 00:08:34.597 END TEST dd_unknown_flag 00:08:34.597 ************************************ 00:08:34.597 16:43:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.597 00:08:34.597 real 0m0.426s 00:08:34.597 user 0m0.202s 00:08:34.597 sys 0m0.131s 00:08:34.597 16:43:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.597 16:43:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.857 ************************************ 00:08:34.857 START TEST dd_invalid_json 00:08:34.857 ************************************ 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- 
common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.857 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:34.857 [2024-11-29 16:43:58.501491] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:34.857 [2024-11-29 16:43:58.501586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75759 ] 00:08:34.857 [2024-11-29 16:43:58.627064] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
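Here dd_invalid_json passes the bdev configuration on /dev/fd/62, and the "JSON data cannot be empty" error further down shows the descriptor carried no parsable document. A rough stand-alone equivalent, assuming an empty config is what the test feeds in, could be:

# Sketch of the dd_invalid_json negative case; an empty JSON document is assumed.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
IF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
OF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
# Process substitution hands spdk_dd a /dev/fd/N path, mirroring the
# --json /dev/fd/62 usage in this log; parsing should fail before any copy starts.
if "$SPDK_DD" --if="$IF" --of="$OF" --json <(printf ''); then
    echo "unexpected success: empty JSON config was accepted" >&2
    exit 1
fi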
00:08:35.116 [2024-11-29 16:43:58.659182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.116 [2024-11-29 16:43:58.685015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.116 [2024-11-29 16:43:58.685108] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:35.116 [2024-11-29 16:43:58.685129] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:35.116 [2024-11-29 16:43:58.685141] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.116 [2024-11-29 16:43:58.685183] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:35.116 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:08:35.116 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.116 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:08:35.116 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:08:35.116 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:08:35.116 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.116 00:08:35.116 real 0m0.298s 00:08:35.116 user 0m0.128s 00:08:35.116 sys 0m0.068s 00:08:35.116 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.116 ************************************ 00:08:35.116 END TEST dd_invalid_json 00:08:35.116 ************************************ 00:08:35.116 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:35.116 16:43:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:35.117 ************************************ 00:08:35.117 START TEST dd_invalid_seek 00:08:35.117 ************************************ 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json 
/dev/fd/62 --bs=512 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.117 16:43:58 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:35.117 { 00:08:35.117 "subsystems": [ 00:08:35.117 { 00:08:35.117 "subsystem": "bdev", 00:08:35.117 "config": [ 00:08:35.117 { 00:08:35.117 "params": { 00:08:35.117 "block_size": 512, 00:08:35.117 "num_blocks": 512, 00:08:35.117 "name": "malloc0" 00:08:35.117 }, 00:08:35.117 "method": "bdev_malloc_create" 00:08:35.117 }, 00:08:35.117 { 00:08:35.117 "params": { 00:08:35.117 "block_size": 512, 00:08:35.117 "num_blocks": 512, 00:08:35.117 "name": "malloc1" 00:08:35.117 }, 00:08:35.117 "method": "bdev_malloc_create" 00:08:35.117 }, 00:08:35.117 { 00:08:35.117 "method": "bdev_wait_for_examine" 00:08:35.117 } 00:08:35.117 ] 00:08:35.117 } 00:08:35.117 ] 00:08:35.117 } 00:08:35.117 [2024-11-29 16:43:58.855652] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:35.117 [2024-11-29 16:43:58.855772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75783 ] 00:08:35.376 [2024-11-29 16:43:58.981062] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
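The JSON block above defines two 512-block, 512-byte-block malloc bdevs; dd_invalid_seek then asks spdk_dd to start writing 513 blocks into malloc1, one block past its end, which triggers the "--seek value too big" error further down. A hand-run sketch with the same configuration (the temporary config file and the error-handling wrapper are illustrative, not part of the original test) might look like:

# Sketch of the dd_invalid_seek negative case using a throwaway config file.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=$(mktemp /tmp/dd_neg_conf.XXXXXX)
cat > "$CONF" <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"block_size":512,"num_blocks":512,"name":"malloc0"},"method":"bdev_malloc_create"},
  {"params":{"block_size":512,"num_blocks":512,"name":"malloc1"},"method":"bdev_malloc_create"},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
# malloc1 has only 512 blocks, so --seek=513 cannot be satisfied.
if "$SPDK_DD" --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json "$CONF"; then
    echo "unexpected success: out-of-range --seek was accepted" >&2
    exit 1
fi
rm -f "$CONF"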
00:08:35.376 [2024-11-29 16:43:59.014539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.376 [2024-11-29 16:43:59.038767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.376 [2024-11-29 16:43:59.072544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.376 [2024-11-29 16:43:59.118028] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:35.376 [2024-11-29 16:43:59.118095] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.636 [2024-11-29 16:43:59.186857] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:08:35.636 ************************************ 00:08:35.636 END TEST dd_invalid_seek 00:08:35.636 ************************************ 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.636 00:08:35.636 real 0m0.449s 00:08:35.636 user 0m0.289s 00:08:35.636 sys 0m0.121s 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:35.636 ************************************ 00:08:35.636 START TEST dd_invalid_skip 00:08:35.636 ************************************ 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 
--ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.636 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:35.636 [2024-11-29 16:43:59.371270] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:35.636 [2024-11-29 16:43:59.371460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75821 ] 00:08:35.636 { 00:08:35.636 "subsystems": [ 00:08:35.636 { 00:08:35.636 "subsystem": "bdev", 00:08:35.636 "config": [ 00:08:35.636 { 00:08:35.636 "params": { 00:08:35.636 "block_size": 512, 00:08:35.636 "num_blocks": 512, 00:08:35.636 "name": "malloc0" 00:08:35.636 }, 00:08:35.636 "method": "bdev_malloc_create" 00:08:35.636 }, 00:08:35.636 { 00:08:35.636 "params": { 00:08:35.636 "block_size": 512, 00:08:35.636 "num_blocks": 512, 00:08:35.636 "name": "malloc1" 00:08:35.636 }, 00:08:35.636 "method": "bdev_malloc_create" 00:08:35.636 }, 00:08:35.636 { 00:08:35.636 "method": "bdev_wait_for_examine" 00:08:35.636 } 00:08:35.636 ] 00:08:35.636 } 00:08:35.636 ] 00:08:35.636 } 00:08:35.896 [2024-11-29 16:43:59.496966] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
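dd_invalid_skip is the input-side mirror of the seek case: the same two 512-block malloc bdevs are created, but --skip=513 points one block past the end of malloc0, producing the "--skip value too big" error further down. Reusing the $CONF file from the seek sketch above, the only change is the offending flag:

# Sketch only; assumes $SPDK_DD and the two-malloc $CONF file from the seek sketch above.
if "$SPDK_DD" --ib=malloc0 --ob=malloc1 --skip=513 --bs=512 --json "$CONF"; then
    echo "unexpected success: out-of-range --skip was accepted" >&2
    exit 1
fi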
00:08:35.896 [2024-11-29 16:43:59.525763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.896 [2024-11-29 16:43:59.544390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.896 [2024-11-29 16:43:59.571183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.896 [2024-11-29 16:43:59.612295] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:35.896 [2024-11-29 16:43:59.612398] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.896 [2024-11-29 16:43:59.670929] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:36.155 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:08:36.155 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:36.155 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:08:36.155 ************************************ 00:08:36.155 END TEST dd_invalid_skip 00:08:36.155 ************************************ 00:08:36.155 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:36.156 00:08:36.156 real 0m0.436s 00:08:36.156 user 0m0.301s 00:08:36.156 sys 0m0.115s 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:36.156 ************************************ 00:08:36.156 START TEST dd_invalid_input_count 00:08:36.156 ************************************ 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- 
dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:36.156 16:43:59 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:36.156 { 00:08:36.156 "subsystems": [ 00:08:36.156 { 00:08:36.156 "subsystem": "bdev", 00:08:36.156 "config": [ 00:08:36.156 { 00:08:36.156 "params": { 00:08:36.156 "block_size": 512, 00:08:36.156 "num_blocks": 512, 00:08:36.156 "name": "malloc0" 00:08:36.156 }, 00:08:36.156 "method": "bdev_malloc_create" 00:08:36.156 }, 00:08:36.156 { 00:08:36.156 "params": { 00:08:36.156 "block_size": 512, 00:08:36.156 "num_blocks": 512, 00:08:36.156 "name": "malloc1" 00:08:36.156 }, 00:08:36.156 "method": "bdev_malloc_create" 00:08:36.156 }, 00:08:36.156 { 00:08:36.156 "method": "bdev_wait_for_examine" 00:08:36.156 } 00:08:36.156 ] 00:08:36.156 } 00:08:36.156 ] 00:08:36.156 } 00:08:36.156 [2024-11-29 16:43:59.845668] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:36.156 [2024-11-29 16:43:59.845796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75850 ] 00:08:36.415 [2024-11-29 16:43:59.970630] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:36.415 [2024-11-29 16:43:59.996291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.415 [2024-11-29 16:44:00.016289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.415 [2024-11-29 16:44:00.045145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.415 [2024-11-29 16:44:00.086318] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:36.415 [2024-11-29 16:44:00.086413] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.415 [2024-11-29 16:44:00.144918] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:36.415 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:08:36.415 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:36.415 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:08:36.415 ************************************ 00:08:36.415 END TEST dd_invalid_input_count 00:08:36.415 ************************************ 00:08:36.415 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:36.415 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:08:36.415 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:36.415 00:08:36.415 real 0m0.409s 00:08:36.415 user 0m0.252s 00:08:36.415 sys 0m0.112s 00:08:36.415 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.415 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:36.675 ************************************ 00:08:36.675 START TEST dd_invalid_output_count 00:08:36.675 ************************************ 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- 
common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:36.675 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:36.675 { 00:08:36.675 "subsystems": [ 00:08:36.675 { 00:08:36.675 "subsystem": "bdev", 00:08:36.675 "config": [ 00:08:36.675 { 00:08:36.675 "params": { 00:08:36.675 "block_size": 512, 00:08:36.675 "num_blocks": 512, 00:08:36.675 "name": "malloc0" 00:08:36.675 }, 00:08:36.675 "method": "bdev_malloc_create" 00:08:36.675 }, 00:08:36.675 { 00:08:36.675 "method": "bdev_wait_for_examine" 00:08:36.675 } 00:08:36.675 ] 00:08:36.675 } 00:08:36.675 ] 00:08:36.675 } 00:08:36.675 [2024-11-29 16:44:00.309798] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:36.675 [2024-11-29 16:44:00.309889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75878 ] 00:08:36.676 [2024-11-29 16:44:00.437889] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
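For dd_invalid_output_count the configuration above creates a single 512-block malloc0 as the output device, and the test asks spdk_dd to copy 513 blocks into it from dd.dump0, which yields the "--count value too big" error further down. A self-contained sketch (the temporary config file and wrapper are illustrative, not part of the original test):

# Sketch of the dd_invalid_output_count negative case.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
IF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
CONF=$(mktemp /tmp/dd_neg_conf.XXXXXX)
cat > "$CONF" <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"block_size":512,"num_blocks":512,"name":"malloc0"},"method":"bdev_malloc_create"},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
# malloc0 can hold only 512 blocks, so a 513-block copy must be rejected.
if "$SPDK_DD" --if="$IF" --ob=malloc0 --count=513 --bs=512 --json "$CONF"; then
    echo "unexpected success: oversized --count was accepted" >&2
    exit 1
fi
rm -f "$CONF"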
00:08:36.934 [2024-11-29 16:44:00.467691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.934 [2024-11-29 16:44:00.490572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.934 [2024-11-29 16:44:00.522583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.934 [2024-11-29 16:44:00.559558] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:36.934 [2024-11-29 16:44:00.559620] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.934 [2024-11-29 16:44:00.628616] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:36.934 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:08:36.934 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:36.934 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:08:36.934 ************************************ 00:08:36.934 END TEST dd_invalid_output_count 00:08:36.934 ************************************ 00:08:36.934 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:36.934 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:08:36.934 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:36.934 00:08:36.934 real 0m0.437s 00:08:36.934 user 0m0.279s 00:08:36.934 sys 0m0.115s 00:08:36.934 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.934 16:44:00 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:37.193 ************************************ 00:08:37.193 START TEST dd_bs_not_multiple 00:08:37.193 ************************************ 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:37.193 16:44:00 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.193 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:37.194 16:44:00 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:37.194 { 00:08:37.194 "subsystems": [ 00:08:37.194 { 00:08:37.194 "subsystem": "bdev", 00:08:37.194 "config": [ 00:08:37.194 { 00:08:37.194 "params": { 00:08:37.194 "block_size": 512, 00:08:37.194 "num_blocks": 512, 00:08:37.194 "name": "malloc0" 00:08:37.194 }, 00:08:37.194 "method": "bdev_malloc_create" 00:08:37.194 }, 00:08:37.194 { 00:08:37.194 "params": { 00:08:37.194 "block_size": 512, 00:08:37.194 "num_blocks": 512, 00:08:37.194 "name": "malloc1" 00:08:37.194 }, 00:08:37.194 "method": "bdev_malloc_create" 00:08:37.194 }, 00:08:37.194 { 00:08:37.194 "method": "bdev_wait_for_examine" 00:08:37.194 } 00:08:37.194 ] 00:08:37.194 } 00:08:37.194 ] 00:08:37.194 } 00:08:37.194 [2024-11-29 16:44:00.806001] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:37.194 [2024-11-29 16:44:00.806277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75915 ] 00:08:37.194 [2024-11-29 16:44:00.932667] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
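dd_bs_not_multiple keeps the two 512-byte-block malloc bdevs but requests 513-byte I/O units; since 513 is not a multiple of the input block size, spdk_dd refuses with the "--bs value must be a multiple of input native block size (512)" error further down. Again reusing the two-malloc $CONF file from the seek sketch, only the --bs value changes:

# Sketch only; assumes $SPDK_DD and the two-malloc $CONF file from the seek sketch above.
if "$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=513 --json "$CONF"; then
    echo "unexpected success: misaligned --bs was accepted" >&2
    exit 1
fi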
00:08:37.194 [2024-11-29 16:44:00.959991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.194 [2024-11-29 16:44:00.982455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.452 [2024-11-29 16:44:01.015588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.452 [2024-11-29 16:44:01.060286] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:37.452 [2024-11-29 16:44:01.060626] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:37.452 [2024-11-29 16:44:01.122037] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:37.452 16:44:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:08:37.452 ************************************ 00:08:37.452 END TEST dd_bs_not_multiple 00:08:37.452 ************************************ 00:08:37.452 16:44:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:37.452 16:44:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:08:37.452 16:44:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:08:37.452 16:44:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:08:37.452 16:44:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:37.452 00:08:37.452 real 0m0.438s 00:08:37.452 user 0m0.285s 00:08:37.452 sys 0m0.114s 00:08:37.452 16:44:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.452 16:44:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:37.452 ************************************ 00:08:37.452 END TEST spdk_dd_negative 00:08:37.452 ************************************ 00:08:37.452 00:08:37.452 real 0m5.078s 00:08:37.452 user 0m2.784s 00:08:37.452 sys 0m1.704s 00:08:37.452 16:44:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.452 16:44:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:37.711 ************************************ 00:08:37.711 END TEST spdk_dd 00:08:37.711 ************************************ 00:08:37.711 00:08:37.711 real 1m2.360s 00:08:37.711 user 0m39.287s 00:08:37.711 sys 0m26.613s 00:08:37.711 16:44:01 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.711 16:44:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:37.711 16:44:01 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:37.711 16:44:01 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:37.711 16:44:01 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:37.711 16:44:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:37.711 16:44:01 -- common/autotest_common.sh@10 -- # set +x 00:08:37.711 16:44:01 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:37.711 16:44:01 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:37.711 16:44:01 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:37.711 16:44:01 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:08:37.711 16:44:01 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:37.711 16:44:01 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:37.711 16:44:01 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:37.711 16:44:01 
-- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.711 16:44:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.711 16:44:01 -- common/autotest_common.sh@10 -- # set +x 00:08:37.711 ************************************ 00:08:37.711 START TEST nvmf_tcp 00:08:37.711 ************************************ 00:08:37.711 16:44:01 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:37.711 * Looking for test storage... 00:08:37.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:37.711 16:44:01 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.711 16:44:01 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.711 16:44:01 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.970 16:44:01 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.970 16:44:01 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:37.970 16:44:01 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.970 16:44:01 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.970 --rc genhtml_branch_coverage=1 00:08:37.970 --rc genhtml_function_coverage=1 00:08:37.970 --rc genhtml_legend=1 00:08:37.970 --rc geninfo_all_blocks=1 00:08:37.970 --rc geninfo_unexecuted_blocks=1 00:08:37.970 00:08:37.970 ' 00:08:37.970 16:44:01 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.970 --rc genhtml_branch_coverage=1 00:08:37.970 --rc genhtml_function_coverage=1 00:08:37.970 --rc genhtml_legend=1 00:08:37.970 --rc geninfo_all_blocks=1 00:08:37.970 --rc geninfo_unexecuted_blocks=1 00:08:37.970 00:08:37.970 ' 00:08:37.970 16:44:01 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.970 --rc genhtml_branch_coverage=1 00:08:37.970 --rc genhtml_function_coverage=1 00:08:37.970 --rc genhtml_legend=1 00:08:37.970 --rc geninfo_all_blocks=1 00:08:37.970 --rc geninfo_unexecuted_blocks=1 00:08:37.970 00:08:37.970 ' 00:08:37.970 16:44:01 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.970 --rc genhtml_branch_coverage=1 00:08:37.970 --rc genhtml_function_coverage=1 00:08:37.970 --rc genhtml_legend=1 00:08:37.970 --rc geninfo_all_blocks=1 00:08:37.970 --rc geninfo_unexecuted_blocks=1 00:08:37.970 00:08:37.970 ' 00:08:37.970 16:44:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:37.970 16:44:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:37.970 16:44:01 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:37.970 16:44:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.970 16:44:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.970 16:44:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:37.970 ************************************ 00:08:37.970 START TEST nvmf_target_core 00:08:37.970 ************************************ 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:37.970 * Looking for test storage... 00:08:37.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.970 --rc genhtml_branch_coverage=1 00:08:37.970 --rc genhtml_function_coverage=1 00:08:37.970 --rc genhtml_legend=1 00:08:37.970 --rc geninfo_all_blocks=1 00:08:37.970 --rc geninfo_unexecuted_blocks=1 00:08:37.970 00:08:37.970 ' 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.970 --rc genhtml_branch_coverage=1 00:08:37.970 --rc genhtml_function_coverage=1 00:08:37.970 --rc genhtml_legend=1 00:08:37.970 --rc geninfo_all_blocks=1 00:08:37.970 --rc geninfo_unexecuted_blocks=1 00:08:37.970 00:08:37.970 ' 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.970 --rc genhtml_branch_coverage=1 00:08:37.970 --rc genhtml_function_coverage=1 00:08:37.970 --rc genhtml_legend=1 00:08:37.970 --rc geninfo_all_blocks=1 00:08:37.970 --rc geninfo_unexecuted_blocks=1 00:08:37.970 00:08:37.970 ' 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.970 --rc genhtml_branch_coverage=1 00:08:37.970 --rc genhtml_function_coverage=1 00:08:37.970 --rc genhtml_legend=1 00:08:37.970 --rc geninfo_all_blocks=1 00:08:37.970 --rc geninfo_unexecuted_blocks=1 00:08:37.970 00:08:37.970 ' 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.970 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.246 16:44:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.247 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.247 ************************************ 00:08:38.247 START TEST nvmf_host_management 00:08:38.247 ************************************ 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:38.247 * Looking for test storage... 
00:08:38.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.247 --rc genhtml_branch_coverage=1 00:08:38.247 --rc genhtml_function_coverage=1 00:08:38.247 --rc genhtml_legend=1 00:08:38.247 --rc geninfo_all_blocks=1 00:08:38.247 --rc geninfo_unexecuted_blocks=1 00:08:38.247 00:08:38.247 ' 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.247 --rc genhtml_branch_coverage=1 00:08:38.247 --rc genhtml_function_coverage=1 00:08:38.247 --rc genhtml_legend=1 00:08:38.247 --rc geninfo_all_blocks=1 00:08:38.247 --rc geninfo_unexecuted_blocks=1 00:08:38.247 00:08:38.247 ' 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.247 --rc genhtml_branch_coverage=1 00:08:38.247 --rc genhtml_function_coverage=1 00:08:38.247 --rc genhtml_legend=1 00:08:38.247 --rc geninfo_all_blocks=1 00:08:38.247 --rc geninfo_unexecuted_blocks=1 00:08:38.247 00:08:38.247 ' 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.247 --rc genhtml_branch_coverage=1 00:08:38.247 --rc genhtml_function_coverage=1 00:08:38.247 --rc genhtml_legend=1 00:08:38.247 --rc geninfo_all_blocks=1 00:08:38.247 --rc geninfo_unexecuted_blocks=1 00:08:38.247 00:08:38.247 ' 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
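(Editor's note: the lcov gate traced above, via scripts/common.sh lt / cmp_versions, is a field-by-field dotted-version comparison that decides whether to export the extra --rc lcov flags. A minimal sketch of that idea, assuming a simplified standalone helper; ver_lt below is illustrative and not SPDK's actual cmp_versions signature.)

    # Illustrative helper, not SPDK's cmp_versions; compares numeric dot/dash fields.
    ver_lt() {
        local IFS=.-
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    # mirrors the gate traced above: extra --rc options only for lcov older than 2
    if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi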
00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.247 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.248 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.248 16:44:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.248 16:44:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:38.248 Cannot find device "nvmf_init_br" 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:38.248 Cannot find device "nvmf_init_br2" 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:38.248 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:38.507 Cannot find device "nvmf_tgt_br" 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.507 Cannot find device "nvmf_tgt_br2" 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:38.507 Cannot find device "nvmf_init_br" 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:38.507 Cannot find device "nvmf_init_br2" 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:38.507 Cannot find device "nvmf_tgt_br" 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:38.507 Cannot find device "nvmf_tgt_br2" 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:38.507 Cannot find device "nvmf_br" 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:38.507 Cannot find device "nvmf_init_if" 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:38.507 Cannot find device "nvmf_init_if2" 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:38.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:38.507 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:38.766 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:38.766 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:38.767 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:38.767 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:08:38.767 00:08:38.767 --- 10.0.0.3 ping statistics --- 00:08:38.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.767 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:38.767 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:38.767 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:08:38.767 00:08:38.767 --- 10.0.0.4 ping statistics --- 00:08:38.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.767 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:38.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:38.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:38.767 00:08:38.767 --- 10.0.0.1 ping statistics --- 00:08:38.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.767 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:38.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:38.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:08:38.767 00:08:38.767 --- 10.0.0.2 ping statistics --- 00:08:38.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.767 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=76255 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 76255 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 76255 ']' 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
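(Editor's note: the nvmf_veth_init sequence traced above builds the test topology by hand: veth pairs whose outer ends join a bridge, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, iptables openings for port 4420, and pings to confirm reachability. A condensed sketch of that pattern for a single initiator/target pair, reusing the device names and addresses shown in the log; this is a sketch, not the full four-interface layout the script creates.)

    # one initiator-side and one target-side veth pair, bridged together
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # open NVMe/TCP port 4420 from the initiator interface, as the ipts wrapper does
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.3   # host -> target address inside the namespace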
00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.767 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.027 [2024-11-29 16:44:02.583749] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:39.027 [2024-11-29 16:44:02.583876] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.027 [2024-11-29 16:44:02.712225] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:39.027 [2024-11-29 16:44:02.744690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.027 [2024-11-29 16:44:02.771584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.027 [2024-11-29 16:44:02.771891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.027 [2024-11-29 16:44:02.772097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.027 [2024-11-29 16:44:02.772248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.027 [2024-11-29 16:44:02.772289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.027 [2024-11-29 16:44:02.773297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.027 [2024-11-29 16:44:02.773380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.027 [2024-11-29 16:44:02.773514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:39.027 [2024-11-29 16:44:02.773521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.027 [2024-11-29 16:44:02.807689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.286 [2024-11-29 16:44:02.908432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.286 Malloc0 00:08:39.286 [2024-11-29 16:44:02.974410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:39.286 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.287 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:39.287 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.287 16:44:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=76296 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 76296 /var/tmp/bdevperf.sock 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 76296 ']' 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
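(Editor's note: starttarget drives the subsystem setup through rpc_cmd and a batched rpcs.txt whose contents are not echoed in this trace; only the resulting notices, Malloc0 and the 10.0.0.3:4420 listener, appear above. A representative reconstruction of that bring-up with scripts/rpc.py, reusing the names, sizes, and transport flags visible in the log; the per-command flags are the usual rpc.py ones, not a verbatim copy of rpcs.txt.)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target listens on the default /var/tmp/spdk.sock

    # same transport flags as the traced nvmf_create_transport call
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # 64 MB malloc bdev with 512-byte blocks, per MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above
    $rpc bdev_malloc_create 64 512 -b Malloc0

    # subsystem cnode0 exposing Malloc0 on the namespace-side address, host0 allowed in
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0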
00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:39.287 { 00:08:39.287 "params": { 00:08:39.287 "name": "Nvme$subsystem", 00:08:39.287 "trtype": "$TEST_TRANSPORT", 00:08:39.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.287 "adrfam": "ipv4", 00:08:39.287 "trsvcid": "$NVMF_PORT", 00:08:39.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.287 "hdgst": ${hdgst:-false}, 00:08:39.287 "ddgst": ${ddgst:-false} 00:08:39.287 }, 00:08:39.287 "method": "bdev_nvme_attach_controller" 00:08:39.287 } 00:08:39.287 EOF 00:08:39.287 )") 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:39.287 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:39.287 "params": { 00:08:39.287 "name": "Nvme0", 00:08:39.287 "trtype": "tcp", 00:08:39.287 "traddr": "10.0.0.3", 00:08:39.287 "adrfam": "ipv4", 00:08:39.287 "trsvcid": "4420", 00:08:39.287 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.287 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:39.287 "hdgst": false, 00:08:39.287 "ddgst": false 00:08:39.287 }, 00:08:39.287 "method": "bdev_nvme_attach_controller" 00:08:39.287 }' 00:08:39.546 [2024-11-29 16:44:03.086593] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:39.546 [2024-11-29 16:44:03.087267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76296 ] 00:08:39.546 [2024-11-29 16:44:03.217831] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:39.546 [2024-11-29 16:44:03.244790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.546 [2024-11-29 16:44:03.269282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.546 [2024-11-29 16:44:03.311182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.805 Running I/O for 10 seconds... 
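(Editor's note: once bdevperf is up, the script polls it over its own RPC socket rather than sleeping blindly; the waitforio trace below checks num_read_ops for Nvme0n1 up to ten times, a quarter second apart, until at least 100 reads have completed. The same idea as a standalone loop, using rpc.py directly in place of the rpc_cmd wrapper.)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 10; i != 0; i--)); do
        reads=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [[ ${reads:-0} -ge 100 ]] && break   # enough verify I/O observed
        sleep 0.25
    done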
00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:39.805 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:40.064 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:40.064 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:40.064 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:40.064 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:40.064 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.064 16:44:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.064 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.064 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:08:40.064 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:08:40.064 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:40.064 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:40.064 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:40.064 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.064 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.064 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.324 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.324 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.324 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.324 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.324 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.325 16:44:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:40.325 [2024-11-29 16:44:03.870495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.870988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.870998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.325 [2024-11-29 16:44:03.871389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.325 [2024-11-29 16:44:03.871400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:40.326 [2024-11-29 16:44:03.871552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 
[2024-11-29 16:44:03.871759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.326 [2024-11-29 16:44:03.871944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.871958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x845e30 is same with the state(6) to be set 00:08:40.326 [2024-11-29 16:44:03.872122] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.326 [2024-11-29 16:44:03.872141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.872153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.326 [2024-11-29 16:44:03.872163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.872173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.326 [2024-11-29 16:44:03.872182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.872192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.326 [2024-11-29 16:44:03.872201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.326 [2024-11-29 16:44:03.872210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62ebb0 is same with the state(6) to be set 00:08:40.326 [2024-11-29 16:44:03.873320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:40.326 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:40.326 00:08:40.326 Latency(us) 00:08:40.326 [2024-11-29T16:44:04.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.326 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:40.326 Job: Nvme0n1 ended in about 0.45 seconds with error 00:08:40.326 Verification LBA range: start 0x0 length 0x400 00:08:40.326 Nvme0n1 : 0.45 1409.60 88.10 140.96 0.00 39987.59 2174.60 36938.47 00:08:40.326 [2024-11-29T16:44:04.118Z] =================================================================================================================== 00:08:40.326 [2024-11-29T16:44:04.118Z] Total : 1409.60 88.10 140.96 0.00 39987.59 2174.60 36938.47 00:08:40.326 [2024-11-29 16:44:03.875342] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.326 [2024-11-29 16:44:03.875366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62ebb0 (9): Bad file descriptor 00:08:40.326 [2024-11-29 16:44:03.886790] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
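The burst of ABORTED - SQ DELETION notices above is the expected fallout of the step traced at host_management.sh@84-85: the host NQN is removed from the subsystem while bdevperf still has 64 writes queued, the target deletes the I/O submission queue, and the controller is reset and reconnected once the host has been re-added ("Resetting controller successful"). A minimal sketch of that remove/re-add step, assuming rpc.py is invoked directly (rpc_cmd in the trace is the test suite's wrapper) against the default socket /var/tmp/spdk.sock and the NQNs shown above:

# drop the host from the subsystem while I/O is in flight; queued commands are aborted
./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# restore access; the initiator's bdev_nvme layer then resets and reconnects the controller
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0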
00:08:41.263 16:44:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 76296 00:08:41.263 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (76296) - No such process 00:08:41.263 16:44:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:41.263 16:44:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:41.263 16:44:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:41.263 16:44:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:41.263 16:44:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:41.263 16:44:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:41.263 16:44:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:41.263 16:44:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:41.263 { 00:08:41.263 "params": { 00:08:41.263 "name": "Nvme$subsystem", 00:08:41.263 "trtype": "$TEST_TRANSPORT", 00:08:41.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.263 "adrfam": "ipv4", 00:08:41.263 "trsvcid": "$NVMF_PORT", 00:08:41.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.263 "hdgst": ${hdgst:-false}, 00:08:41.263 "ddgst": ${ddgst:-false} 00:08:41.263 }, 00:08:41.263 "method": "bdev_nvme_attach_controller" 00:08:41.263 } 00:08:41.263 EOF 00:08:41.263 )") 00:08:41.263 16:44:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:41.263 16:44:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:41.263 16:44:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:41.263 16:44:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.263 "params": { 00:08:41.263 "name": "Nvme0", 00:08:41.263 "trtype": "tcp", 00:08:41.263 "traddr": "10.0.0.3", 00:08:41.263 "adrfam": "ipv4", 00:08:41.263 "trsvcid": "4420", 00:08:41.263 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:41.263 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:41.263 "hdgst": false, 00:08:41.263 "ddgst": false 00:08:41.263 }, 00:08:41.263 "method": "bdev_nvme_attach_controller" 00:08:41.263 }' 00:08:41.263 [2024-11-29 16:44:04.934209] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:41.263 [2024-11-29 16:44:04.934486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76336 ] 00:08:41.521 [2024-11-29 16:44:05.061523] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
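The heredoc above is what gen_nvmf_target_json expands into: a single bdev_nvme_attach_controller entry with the target's address and NQNs filled in, fed to bdevperf through --json /dev/fd/62. A rough standalone equivalent, assuming the same 10.0.0.3:4420 listener and paths relative to the SPDK repo root; the surrounding "subsystems"/"bdev" wrapper is an assumption here, since the trace only prints the attach-controller fragment:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same workload as the trace: 64 outstanding 64KiB verify I/Os for 1 second
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 1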
00:08:41.521 [2024-11-29 16:44:05.085272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.521 [2024-11-29 16:44:05.105154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.521 [2024-11-29 16:44:05.142003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.521 Running I/O for 1 seconds... 00:08:42.898 1600.00 IOPS, 100.00 MiB/s 00:08:42.898 Latency(us) 00:08:42.898 [2024-11-29T16:44:06.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.898 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:42.898 Verification LBA range: start 0x0 length 0x400 00:08:42.898 Nvme0n1 : 1.03 1619.46 101.22 0.00 0.00 38768.72 3559.80 36461.85 00:08:42.898 [2024-11-29T16:44:06.690Z] =================================================================================================================== 00:08:42.898 [2024-11-29T16:44:06.690Z] Total : 1619.46 101.22 0.00 0.00 38768.72 3559.80 36461.85 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.898 rmmod nvme_tcp 00:08:42.898 rmmod nvme_fabrics 00:08:42.898 rmmod nvme_keyring 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 76255 ']' 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 76255 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 76255 ']' 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 76255 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76255 00:08:42.898 killing process with pid 76255 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76255' 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 76255 00:08:42.898 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 76255 00:08:42.899 [2024-11-29 16:44:06.652707] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:42.899 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.899 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:42.899 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:42.899 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:42.899 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:42.899 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:42.899 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.158 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.419 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:43.419 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:43.419 00:08:43.419 real 0m5.171s 00:08:43.419 user 0m17.853s 00:08:43.419 sys 0m1.421s 00:08:43.419 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.419 16:44:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.419 ************************************ 00:08:43.419 END TEST nvmf_host_management 00:08:43.419 ************************************ 00:08:43.419 16:44:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:43.419 16:44:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:43.419 16:44:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.419 16:44:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:43.419 ************************************ 00:08:43.419 START TEST nvmf_lvol 00:08:43.419 ************************************ 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:43.419 * Looking for test storage... 
00:08:43.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:43.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.419 --rc genhtml_branch_coverage=1 00:08:43.419 --rc genhtml_function_coverage=1 00:08:43.419 --rc genhtml_legend=1 00:08:43.419 --rc geninfo_all_blocks=1 00:08:43.419 --rc geninfo_unexecuted_blocks=1 00:08:43.419 00:08:43.419 ' 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:43.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.419 --rc genhtml_branch_coverage=1 00:08:43.419 --rc genhtml_function_coverage=1 00:08:43.419 --rc genhtml_legend=1 00:08:43.419 --rc geninfo_all_blocks=1 00:08:43.419 --rc geninfo_unexecuted_blocks=1 00:08:43.419 00:08:43.419 ' 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:43.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.419 --rc genhtml_branch_coverage=1 00:08:43.419 --rc genhtml_function_coverage=1 00:08:43.419 --rc genhtml_legend=1 00:08:43.419 --rc geninfo_all_blocks=1 00:08:43.419 --rc geninfo_unexecuted_blocks=1 00:08:43.419 00:08:43.419 ' 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:43.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.419 --rc genhtml_branch_coverage=1 00:08:43.419 --rc genhtml_function_coverage=1 00:08:43.419 --rc genhtml_legend=1 00:08:43.419 --rc geninfo_all_blocks=1 00:08:43.419 --rc geninfo_unexecuted_blocks=1 00:08:43.419 00:08:43.419 ' 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.419 16:44:07 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.419 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:43.680 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:43.680 
16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.680 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:43.681 Cannot find device "nvmf_init_br" 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:43.681 Cannot find device "nvmf_init_br2" 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:43.681 Cannot find device "nvmf_tgt_br" 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:43.681 Cannot find device "nvmf_tgt_br2" 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:43.681 Cannot find device "nvmf_init_br" 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:43.681 Cannot find device "nvmf_init_br2" 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:43.681 Cannot find device "nvmf_tgt_br" 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:43.681 Cannot find device "nvmf_tgt_br2" 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:43.681 Cannot find device "nvmf_br" 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:43.681 Cannot find device "nvmf_init_if" 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:43.681 Cannot find device "nvmf_init_if2" 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:43.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:43.681 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:43.941 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:43.941 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:08:43.941 00:08:43.941 --- 10.0.0.3 ping statistics --- 00:08:43.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.941 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:43.941 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:43.941 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:08:43.941 00:08:43.941 --- 10.0.0.4 ping statistics --- 00:08:43.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.941 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:43.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:43.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:08:43.941 00:08:43.941 --- 10.0.0.1 ping statistics --- 00:08:43.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.941 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:43.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:43.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:08:43.941 00:08:43.941 --- 10.0.0.2 ping statistics --- 00:08:43.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.941 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=76602 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 76602 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 76602 ']' 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.941 16:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:44.200 [2024-11-29 16:44:07.749284] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:08:44.200 [2024-11-29 16:44:07.749403] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.200 [2024-11-29 16:44:07.877113] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
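The nvmf_veth_init steps traced above (common.sh@177-225) build the virtual network that every NVMe/TCP test in this log talks over: one veth pair per initiator interface, one per target interface with the far end moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, TCP port 4420 opened in the INPUT chain, and a round of pings to prove connectivity. A hand-condensed sketch of that topology follows; it is an illustration of what the trace does, not the SPDK helper itself, and the iptables comments are shortened to the SPDK_NVMF tag that the later cleanup greps for.

# Sketch of the test topology built by nvmf_veth_init (illustrative only).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator side
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target side
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                           # peer ends all join one bridge
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4                        # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # target namespace -> host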
00:08:44.200 [2024-11-29 16:44:07.908261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:44.200 [2024-11-29 16:44:07.931678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.200 [2024-11-29 16:44:07.931765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.200 [2024-11-29 16:44:07.931789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.200 [2024-11-29 16:44:07.931799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.200 [2024-11-29 16:44:07.931808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.200 [2024-11-29 16:44:07.932708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.200 [2024-11-29 16:44:07.932857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.200 [2024-11-29 16:44:07.932864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.200 [2024-11-29 16:44:07.967357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.135 16:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.135 16:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:45.135 16:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.135 16:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.135 16:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:45.135 16:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.135 16:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:45.395 [2024-11-29 16:44:09.052765] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.395 16:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:45.655 16:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:45.655 16:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:45.914 16:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:45.914 16:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:46.174 16:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:46.434 16:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=462746f1-b247-49c7-b34e-e01d6f44acaa 00:08:46.434 16:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 462746f1-b247-49c7-b34e-e01d6f44acaa lvol 20 00:08:46.693 16:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # 
lvol=4d032606-b8e5-432c-81d1-38d6e45dd7b3 00:08:46.693 16:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:46.952 16:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4d032606-b8e5-432c-81d1-38d6e45dd7b3 00:08:47.519 16:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:47.519 [2024-11-29 16:44:11.239474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:47.519 16:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:47.778 16:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=76675 00:08:47.778 16:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:47.779 16:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:48.716 16:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 4d032606-b8e5-432c-81d1-38d6e45dd7b3 MY_SNAPSHOT 00:08:49.284 16:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=169b7e4a-b59b-40b8-aa39-9241ca2253cc 00:08:49.285 16:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 4d032606-b8e5-432c-81d1-38d6e45dd7b3 30 00:08:49.543 16:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 169b7e4a-b59b-40b8-aa39-9241ca2253cc MY_CLONE 00:08:49.803 16:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=980a4c80-7630-4081-b7b8-b11fa03423f1 00:08:49.803 16:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 980a4c80-7630-4081-b7b8-b11fa03423f1 00:08:50.372 16:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 76675 00:08:58.503 Initializing NVMe Controllers 00:08:58.503 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:58.503 Controller IO queue size 128, less than required. 00:08:58.503 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:58.503 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:58.503 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:58.503 Initialization complete. Launching workers. 
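Underneath the xtrace noise, the nvmf_lvol run above is a short rpc.py sequence: stack a raid0 bdev on two 64 MiB malloc bdevs, create a logical volume store and a 20 MiB lvol on it, export the lvol over NVMe/TCP, then snapshot, resize, clone and inflate it while spdk_nvme_perf keeps random writes in flight; the latency summary that follows in the trace is the output of that background perf job. A condensed sketch of the sequence (paths and option values copied from the trace, UUID capture simplified, not the test script itself):

# Condensed nvmf_lvol flow; rpc points at scripts/rpc.py of the checked-out repo.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                   # -> Malloc0
$rpc bdev_malloc_create 64 512                                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB lvol UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# Keep I/O running against the exported lvol while mutating it underneath:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait                                                             # perf summary printed below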
00:08:58.503 ======================================================== 00:08:58.503 Latency(us) 00:08:58.503 Device Information : IOPS MiB/s Average min max 00:08:58.503 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10108.18 39.49 12672.63 2598.76 57209.16 00:08:58.503 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10175.28 39.75 12581.82 1673.99 107056.51 00:08:58.503 ======================================================== 00:08:58.503 Total : 20283.46 79.23 12627.07 1673.99 107056.51 00:08:58.503 00:08:58.503 16:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:58.503 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4d032606-b8e5-432c-81d1-38d6e45dd7b3 00:08:58.762 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 462746f1-b247-49c7-b34e-e01d6f44acaa 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.020 rmmod nvme_tcp 00:08:59.020 rmmod nvme_fabrics 00:08:59.020 rmmod nvme_keyring 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 76602 ']' 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 76602 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 76602 ']' 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 76602 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.020 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76602 00:08:59.280 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.280 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.280 killing process with pid 76602 00:08:59.280 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 76602' 00:08:59.280 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 76602 00:08:59.280 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 76602 00:08:59.280 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:59.280 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:59.280 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:59.280 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:59.280 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:59.280 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:59.280 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:59.280 16:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.280 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:59.280 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:59.280 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:59.280 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:59.280 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:59.280 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:59.280 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:59.539 00:08:59.539 real 0m16.218s 00:08:59.539 user 1m6.482s 00:08:59.539 sys 0m4.264s 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:59.539 ************************************ 00:08:59.539 END TEST nvmf_lvol 00:08:59.539 ************************************ 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.539 ************************************ 00:08:59.539 START TEST nvmf_lvs_grow 00:08:59.539 ************************************ 00:08:59.539 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:59.798 * Looking for test storage... 00:08:59.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:59.798 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:59.798 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:59.798 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:59.798 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:59.798 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.798 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.798 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.798 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:59.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.799 --rc genhtml_branch_coverage=1 00:08:59.799 --rc genhtml_function_coverage=1 00:08:59.799 --rc genhtml_legend=1 00:08:59.799 --rc geninfo_all_blocks=1 00:08:59.799 --rc geninfo_unexecuted_blocks=1 00:08:59.799 00:08:59.799 ' 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:59.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.799 --rc genhtml_branch_coverage=1 00:08:59.799 --rc genhtml_function_coverage=1 00:08:59.799 --rc genhtml_legend=1 00:08:59.799 --rc geninfo_all_blocks=1 00:08:59.799 --rc geninfo_unexecuted_blocks=1 00:08:59.799 00:08:59.799 ' 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:59.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.799 --rc genhtml_branch_coverage=1 00:08:59.799 --rc genhtml_function_coverage=1 00:08:59.799 --rc genhtml_legend=1 00:08:59.799 --rc geninfo_all_blocks=1 00:08:59.799 --rc geninfo_unexecuted_blocks=1 00:08:59.799 00:08:59.799 ' 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:59.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.799 --rc genhtml_branch_coverage=1 00:08:59.799 --rc genhtml_function_coverage=1 00:08:59.799 --rc genhtml_legend=1 00:08:59.799 --rc geninfo_all_blocks=1 00:08:59.799 --rc geninfo_unexecuted_blocks=1 00:08:59.799 00:08:59.799 ' 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:59.799 16:44:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:59.799 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
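The lt/cmp_versions walk a little further up in the trace (scripts/common.sh@333-368, used in the nvmf_lvs_grow preamble to decide which lcov coverage options to export by testing the installed lcov against versions 1.15 and 2) is a plain component-wise version compare: split both strings on '.', '-' and ':' and decide at the first differing numeric component. A hedged re-implementation of that logic, written from the trace rather than copied from scripts/common.sh:

# Re-implementation of the version check traced above (lt 1.15 2 -> true); illustrative,
# not the scripts/common.sh original. Assumes purely numeric components and treats
# missing components as 0.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"                  # split on '.', '-' and ':'
    IFS=.-: read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if ((a > b)); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if ((a < b)); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '=' || $op == '<=' || $op == '>=' ]]  # all components equal
}
lt() { cmp_versions "$1" '<' "$2"; }                # e.g. lt 1.15 2 succeeds, as in the trace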
00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.799 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:59.800 Cannot find device "nvmf_init_br" 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:59.800 Cannot find device "nvmf_init_br2" 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:59.800 Cannot find device "nvmf_tgt_br" 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:59.800 Cannot find device "nvmf_tgt_br2" 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:59.800 Cannot find device "nvmf_init_br" 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:59.800 Cannot find device "nvmf_init_br2" 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:59.800 Cannot find device "nvmf_tgt_br" 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:59.800 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:00.059 Cannot find device "nvmf_tgt_br2" 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:00.059 Cannot find device "nvmf_br" 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:00.059 Cannot find device "nvmf_init_if" 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:00.059 Cannot find device "nvmf_init_if2" 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:00.059 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:00.059 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:00.059 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:00.319 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:00.319 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:09:00.319 00:09:00.319 --- 10.0.0.3 ping statistics --- 00:09:00.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.319 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:00.319 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:00.319 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:09:00.319 00:09:00.319 --- 10.0.0.4 ping statistics --- 00:09:00.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.319 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:00.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:00.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:09:00.319 00:09:00.319 --- 10.0.0.1 ping statistics --- 00:09:00.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.319 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:00.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:00.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:09:00.319 00:09:00.319 --- 10.0.0.2 ping statistics --- 00:09:00.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.319 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=77067 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 77067 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 77067 ']' 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.319 16:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.319 [2024-11-29 16:44:23.984821] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:00.319 [2024-11-29 16:44:23.984908] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.579 [2024-11-29 16:44:24.114743] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:00.579 [2024-11-29 16:44:24.147478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.579 [2024-11-29 16:44:24.171565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.579 [2024-11-29 16:44:24.171629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.579 [2024-11-29 16:44:24.171644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.579 [2024-11-29 16:44:24.171654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.579 [2024-11-29 16:44:24.171663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.579 [2024-11-29 16:44:24.172016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.579 [2024-11-29 16:44:24.207850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:00.579 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.579 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:00.579 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:00.579 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.579 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.579 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.579 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:00.837 [2024-11-29 16:44:24.597039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.837 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:00.837 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.837 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.837 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:01.097 ************************************ 00:09:01.097 START TEST lvs_grow_clean 00:09:01.097 ************************************ 00:09:01.097 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:01.097 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:01.097 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:01.097 16:44:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:01.097 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:01.097 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:01.097 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:01.097 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:01.097 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:01.097 16:44:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:01.356 16:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:01.356 16:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:01.614 16:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c1f66b53-a6d4-45ff-9034-d92120033588 00:09:01.614 16:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1f66b53-a6d4-45ff-9034-d92120033588 00:09:01.614 16:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:01.872 16:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:01.872 16:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:01.872 16:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c1f66b53-a6d4-45ff-9034-d92120033588 lvol 150 00:09:02.130 16:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f2786dcf-96d5-42a9-b237-3c52fbcf51f7 00:09:02.130 16:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:02.130 16:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:02.699 [2024-11-29 16:44:26.190551] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:02.699 [2024-11-29 16:44:26.190634] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:02.699 true 00:09:02.699 16:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1f66b53-a6d4-45ff-9034-d92120033588 00:09:02.699 16:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:02.958 16:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:02.958 16:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:03.217 16:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f2786dcf-96d5-42a9-b237-3c52fbcf51f7 00:09:03.476 16:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:03.735 [2024-11-29 16:44:27.395844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:03.735 16:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:03.994 16:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:03.994 16:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=77142 00:09:03.994 16:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.994 16:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 77142 /var/tmp/bdevperf.sock 00:09:03.994 16:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 77142 ']' 00:09:03.994 16:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.994 16:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.994 16:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.994 16:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.994 16:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:03.994 [2024-11-29 16:44:27.734628] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:03.994 [2024-11-29 16:44:27.735488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77142 ] 00:09:04.253 [2024-11-29 16:44:27.861128] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:04.253 [2024-11-29 16:44:27.896150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.253 [2024-11-29 16:44:27.920407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.253 [2024-11-29 16:44:27.954277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:05.189 16:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.190 16:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:05.190 16:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:05.190 Nvme0n1 00:09:05.190 16:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:05.448 [ 00:09:05.448 { 00:09:05.448 "name": "Nvme0n1", 00:09:05.448 "aliases": [ 00:09:05.448 "f2786dcf-96d5-42a9-b237-3c52fbcf51f7" 00:09:05.448 ], 00:09:05.448 "product_name": "NVMe disk", 00:09:05.448 "block_size": 4096, 00:09:05.448 "num_blocks": 38912, 00:09:05.448 "uuid": "f2786dcf-96d5-42a9-b237-3c52fbcf51f7", 00:09:05.448 "numa_id": -1, 00:09:05.448 "assigned_rate_limits": { 00:09:05.448 "rw_ios_per_sec": 0, 00:09:05.448 "rw_mbytes_per_sec": 0, 00:09:05.448 "r_mbytes_per_sec": 0, 00:09:05.448 "w_mbytes_per_sec": 0 00:09:05.448 }, 00:09:05.448 "claimed": false, 00:09:05.448 "zoned": false, 00:09:05.448 "supported_io_types": { 00:09:05.448 "read": true, 00:09:05.449 "write": true, 00:09:05.449 "unmap": true, 00:09:05.449 "flush": true, 00:09:05.449 "reset": true, 00:09:05.449 "nvme_admin": true, 00:09:05.449 "nvme_io": true, 00:09:05.449 "nvme_io_md": false, 00:09:05.449 "write_zeroes": true, 00:09:05.449 "zcopy": false, 00:09:05.449 "get_zone_info": false, 00:09:05.449 "zone_management": false, 00:09:05.449 "zone_append": false, 00:09:05.449 "compare": true, 00:09:05.449 "compare_and_write": true, 00:09:05.449 "abort": true, 00:09:05.449 "seek_hole": false, 00:09:05.449 "seek_data": false, 00:09:05.449 "copy": true, 00:09:05.449 "nvme_iov_md": false 00:09:05.449 }, 00:09:05.449 "memory_domains": [ 00:09:05.449 { 00:09:05.449 "dma_device_id": "system", 00:09:05.449 "dma_device_type": 1 00:09:05.449 } 00:09:05.449 ], 00:09:05.449 "driver_specific": { 00:09:05.449 "nvme": [ 00:09:05.449 { 00:09:05.449 "trid": { 00:09:05.449 "trtype": "TCP", 00:09:05.449 "adrfam": "IPv4", 00:09:05.449 "traddr": "10.0.0.3", 00:09:05.449 "trsvcid": "4420", 00:09:05.449 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:05.449 }, 00:09:05.449 "ctrlr_data": { 00:09:05.449 "cntlid": 1, 00:09:05.449 "vendor_id": "0x8086", 00:09:05.449 "model_number": "SPDK bdev Controller", 00:09:05.449 "serial_number": "SPDK0", 00:09:05.449 
"firmware_revision": "25.01", 00:09:05.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:05.449 "oacs": { 00:09:05.449 "security": 0, 00:09:05.449 "format": 0, 00:09:05.449 "firmware": 0, 00:09:05.449 "ns_manage": 0 00:09:05.449 }, 00:09:05.449 "multi_ctrlr": true, 00:09:05.449 "ana_reporting": false 00:09:05.449 }, 00:09:05.449 "vs": { 00:09:05.449 "nvme_version": "1.3" 00:09:05.449 }, 00:09:05.449 "ns_data": { 00:09:05.449 "id": 1, 00:09:05.449 "can_share": true 00:09:05.449 } 00:09:05.449 } 00:09:05.449 ], 00:09:05.449 "mp_policy": "active_passive" 00:09:05.449 } 00:09:05.449 } 00:09:05.449 ] 00:09:05.449 16:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=77171 00:09:05.449 16:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:05.449 16:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:05.708 Running I/O for 10 seconds... 00:09:06.645 Latency(us) 00:09:06.645 [2024-11-29T16:44:30.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.645 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:06.645 [2024-11-29T16:44:30.437Z] =================================================================================================================== 00:09:06.645 [2024-11-29T16:44:30.437Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:06.645 00:09:07.604 16:44:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c1f66b53-a6d4-45ff-9034-d92120033588 00:09:07.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.604 Nvme0n1 : 2.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:07.604 [2024-11-29T16:44:31.396Z] =================================================================================================================== 00:09:07.604 [2024-11-29T16:44:31.396Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:07.604 00:09:07.863 true 00:09:07.863 16:44:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1f66b53-a6d4-45ff-9034-d92120033588 00:09:07.863 16:44:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:08.123 16:44:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:08.123 16:44:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:08.123 16:44:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 77171 00:09:08.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.692 Nvme0n1 : 3.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:08.692 [2024-11-29T16:44:32.484Z] =================================================================================================================== 00:09:08.692 [2024-11-29T16:44:32.484Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:08.692 00:09:09.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:09:09.628 Nvme0n1 : 4.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:09.628 [2024-11-29T16:44:33.420Z] =================================================================================================================== 00:09:09.628 [2024-11-29T16:44:33.420Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:09.628 00:09:11.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.009 Nvme0n1 : 5.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:11.009 [2024-11-29T16:44:34.801Z] =================================================================================================================== 00:09:11.009 [2024-11-29T16:44:34.801Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:11.009 00:09:11.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.944 Nvme0n1 : 6.00 6709.83 26.21 0.00 0.00 0.00 0.00 0.00 00:09:11.944 [2024-11-29T16:44:35.736Z] =================================================================================================================== 00:09:11.944 [2024-11-29T16:44:35.736Z] Total : 6709.83 26.21 0.00 0.00 0.00 0.00 0.00 00:09:11.944 00:09:12.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.881 Nvme0n1 : 7.00 6694.71 26.15 0.00 0.00 0.00 0.00 0.00 00:09:12.881 [2024-11-29T16:44:36.673Z] =================================================================================================================== 00:09:12.881 [2024-11-29T16:44:36.673Z] Total : 6694.71 26.15 0.00 0.00 0.00 0.00 0.00 00:09:12.881 00:09:13.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.818 Nvme0n1 : 8.00 6556.38 25.61 0.00 0.00 0.00 0.00 0.00 00:09:13.818 [2024-11-29T16:44:37.610Z] =================================================================================================================== 00:09:13.818 [2024-11-29T16:44:37.610Z] Total : 6556.38 25.61 0.00 0.00 0.00 0.00 0.00 00:09:13.818 00:09:14.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.756 Nvme0n1 : 9.00 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:09:14.756 [2024-11-29T16:44:38.548Z] =================================================================================================================== 00:09:14.756 [2024-11-29T16:44:38.548Z] Total : 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:09:14.756 00:09:15.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.693 Nvme0n1 : 10.00 6515.10 25.45 0.00 0.00 0.00 0.00 0.00 00:09:15.693 [2024-11-29T16:44:39.485Z] =================================================================================================================== 00:09:15.693 [2024-11-29T16:44:39.485Z] Total : 6515.10 25.45 0.00 0.00 0.00 0.00 0.00 00:09:15.693 00:09:15.693 00:09:15.693 Latency(us) 00:09:15.693 [2024-11-29T16:44:39.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.693 Nvme0n1 : 10.02 6512.44 25.44 0.00 0.00 19647.35 16681.89 162052.65 00:09:15.693 [2024-11-29T16:44:39.485Z] =================================================================================================================== 00:09:15.693 [2024-11-29T16:44:39.485Z] Total : 6512.44 25.44 0.00 0.00 19647.35 16681.89 162052.65 00:09:15.693 { 00:09:15.693 "results": [ 00:09:15.693 { 00:09:15.693 "job": "Nvme0n1", 00:09:15.693 "core_mask": "0x2", 00:09:15.693 "workload": "randwrite", 
00:09:15.693 "status": "finished", 00:09:15.693 "queue_depth": 128, 00:09:15.693 "io_size": 4096, 00:09:15.693 "runtime": 10.02374, 00:09:15.693 "iops": 6512.439468701303, 00:09:15.693 "mibps": 25.439216674614464, 00:09:15.693 "io_failed": 0, 00:09:15.693 "io_timeout": 0, 00:09:15.693 "avg_latency_us": 19647.34907024255, 00:09:15.693 "min_latency_us": 16681.890909090907, 00:09:15.693 "max_latency_us": 162052.65454545454 00:09:15.693 } 00:09:15.693 ], 00:09:15.693 "core_count": 1 00:09:15.693 } 00:09:15.693 16:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 77142 00:09:15.694 16:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 77142 ']' 00:09:15.694 16:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 77142 00:09:15.694 16:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:15.694 16:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.694 16:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77142 00:09:15.694 killing process with pid 77142 00:09:15.694 Received shutdown signal, test time was about 10.000000 seconds 00:09:15.694 00:09:15.694 Latency(us) 00:09:15.694 [2024-11-29T16:44:39.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.694 [2024-11-29T16:44:39.486Z] =================================================================================================================== 00:09:15.694 [2024-11-29T16:44:39.486Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:15.694 16:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:15.694 16:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:15.694 16:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77142' 00:09:15.694 16:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 77142 00:09:15.694 16:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 77142 00:09:15.953 16:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:16.211 16:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:16.469 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1f66b53-a6d4-45ff-9034-d92120033588 00:09:16.470 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:16.728 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:16.728 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:16.728 
16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:16.987 [2024-11-29 16:44:40.704726] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:16.987 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1f66b53-a6d4-45ff-9034-d92120033588 00:09:16.987 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:16.987 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1f66b53-a6d4-45ff-9034-d92120033588 00:09:16.987 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.987 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.987 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.987 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.987 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.987 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.987 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.987 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:16.987 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1f66b53-a6d4-45ff-9034-d92120033588 00:09:17.246 request: 00:09:17.246 { 00:09:17.246 "uuid": "c1f66b53-a6d4-45ff-9034-d92120033588", 00:09:17.246 "method": "bdev_lvol_get_lvstores", 00:09:17.246 "req_id": 1 00:09:17.246 } 00:09:17.246 Got JSON-RPC error response 00:09:17.246 response: 00:09:17.246 { 00:09:17.246 "code": -19, 00:09:17.246 "message": "No such device" 00:09:17.246 } 00:09:17.246 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:17.246 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:17.246 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:17.246 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:17.246 16:44:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:17.505 aio_bdev 00:09:17.505 16:44:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f2786dcf-96d5-42a9-b237-3c52fbcf51f7 00:09:17.505 16:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f2786dcf-96d5-42a9-b237-3c52fbcf51f7 00:09:17.505 16:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.505 16:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:17.505 16:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.505 16:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.505 16:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:17.763 16:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f2786dcf-96d5-42a9-b237-3c52fbcf51f7 -t 2000 00:09:18.021 [ 00:09:18.021 { 00:09:18.021 "name": "f2786dcf-96d5-42a9-b237-3c52fbcf51f7", 00:09:18.021 "aliases": [ 00:09:18.021 "lvs/lvol" 00:09:18.021 ], 00:09:18.021 "product_name": "Logical Volume", 00:09:18.021 "block_size": 4096, 00:09:18.021 "num_blocks": 38912, 00:09:18.021 "uuid": "f2786dcf-96d5-42a9-b237-3c52fbcf51f7", 00:09:18.021 "assigned_rate_limits": { 00:09:18.021 "rw_ios_per_sec": 0, 00:09:18.021 "rw_mbytes_per_sec": 0, 00:09:18.021 "r_mbytes_per_sec": 0, 00:09:18.021 "w_mbytes_per_sec": 0 00:09:18.021 }, 00:09:18.021 "claimed": false, 00:09:18.021 "zoned": false, 00:09:18.021 "supported_io_types": { 00:09:18.021 "read": true, 00:09:18.021 "write": true, 00:09:18.021 "unmap": true, 00:09:18.021 "flush": false, 00:09:18.021 "reset": true, 00:09:18.021 "nvme_admin": false, 00:09:18.021 "nvme_io": false, 00:09:18.021 "nvme_io_md": false, 00:09:18.021 "write_zeroes": true, 00:09:18.021 "zcopy": false, 00:09:18.021 "get_zone_info": false, 00:09:18.021 "zone_management": false, 00:09:18.021 "zone_append": false, 00:09:18.021 "compare": false, 00:09:18.021 "compare_and_write": false, 00:09:18.021 "abort": false, 00:09:18.021 "seek_hole": true, 00:09:18.021 "seek_data": true, 00:09:18.021 "copy": false, 00:09:18.021 "nvme_iov_md": false 00:09:18.021 }, 00:09:18.021 "driver_specific": { 00:09:18.021 "lvol": { 00:09:18.021 "lvol_store_uuid": "c1f66b53-a6d4-45ff-9034-d92120033588", 00:09:18.021 "base_bdev": "aio_bdev", 00:09:18.021 "thin_provision": false, 00:09:18.022 "num_allocated_clusters": 38, 00:09:18.022 "snapshot": false, 00:09:18.022 "clone": false, 00:09:18.022 "esnap_clone": false 00:09:18.022 } 00:09:18.022 } 00:09:18.022 } 00:09:18.022 ] 00:09:18.022 16:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:18.022 16:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1f66b53-a6d4-45ff-9034-d92120033588 00:09:18.022 16:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:18.280 16:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:18.280 
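The block above exercises lvstore rediscovery: deleting the AIO bdev hot-removes the lvstore (so the wrapped bdev_lvol_get_lvstores call is expected to fail), and re-creating the bdev lets the lvol examine path reload it before the cluster counts are re-checked. A rough sketch of that delete/re-create cycle, using the backing file path and the UUIDs seen in this trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    lvs_uuid=c1f66b53-a6d4-45ff-9034-d92120033588
    lvol_uuid=f2786dcf-96d5-42a9-b237-3c52fbcf51f7
    "$rpc" bdev_aio_delete aio_bdev                      # lvstore is hot-removed with the base bdev
    ! "$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid"       # must now fail with "No such device"
    "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096     # re-attach the same backing file
    "$rpc" bdev_wait_for_examine                         # examine reloads lvs/lvol from on-disk metadata
    "$rpc" bdev_get_bdevs -b "$lvol_uuid" -t 2000        # the lvol reappears unchanged
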
16:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1f66b53-a6d4-45ff-9034-d92120033588 00:09:18.280 16:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:18.538 16:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:18.538 16:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f2786dcf-96d5-42a9-b237-3c52fbcf51f7 00:09:18.797 16:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c1f66b53-a6d4-45ff-9034-d92120033588 00:09:19.055 16:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:19.313 16:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:19.571 ************************************ 00:09:19.572 END TEST lvs_grow_clean 00:09:19.572 ************************************ 00:09:19.572 00:09:19.572 real 0m18.702s 00:09:19.572 user 0m17.839s 00:09:19.572 sys 0m2.401s 00:09:19.572 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.572 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:19.831 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:19.831 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:19.831 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.831 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:19.831 ************************************ 00:09:19.831 START TEST lvs_grow_dirty 00:09:19.831 ************************************ 00:09:19.831 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:19.831 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:19.831 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:19.831 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:19.831 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:19.831 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:19.831 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:19.831 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:19.831 16:44:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:19.831 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:20.090 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:20.090 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:20.348 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5401fe96-9561-40af-93b0-c542a52c6ecf 00:09:20.348 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:20.348 16:44:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5401fe96-9561-40af-93b0-c542a52c6ecf 00:09:20.607 16:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:20.607 16:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:20.607 16:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5401fe96-9561-40af-93b0-c542a52c6ecf lvol 150 00:09:20.866 16:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=27b2c475-c37f-4842-bd05-133e7c3e153b 00:09:20.866 16:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:20.866 16:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:21.125 [2024-11-29 16:44:44.773969] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:21.125 [2024-11-29 16:44:44.774071] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:21.125 true 00:09:21.125 16:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5401fe96-9561-40af-93b0-c542a52c6ecf 00:09:21.125 16:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:21.384 16:44:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:21.384 16:44:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:21.643 16:44:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 27b2c475-c37f-4842-bd05-133e7c3e153b 00:09:21.902 16:44:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:22.160 [2024-11-29 16:44:45.770486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:22.160 16:44:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:22.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:22.419 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=77419 00:09:22.419 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:22.419 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:22.420 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 77419 /var/tmp/bdevperf.sock 00:09:22.420 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 77419 ']' 00:09:22.420 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:22.420 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.420 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:22.420 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.420 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:22.420 [2024-11-29 16:44:46.061439] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:22.420 [2024-11-29 16:44:46.061773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77419 ] 00:09:22.420 [2024-11-29 16:44:46.188441] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
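Around this point the dirty-mode setup mirrors the clean run: the freshly created lvol is exported over NVMe/TCP, and a second SPDK app (bdevperf) is started with its own RPC socket and attached to the target. A condensed sketch of that export-and-attach sequence, using the NQN, address, and lvol UUID observed in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvol_uuid=27b2c475-c37f-4842-bd05-133e7c3e153b
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol_uuid"
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # bdevperf runs as its own app (-z waits for RPC-driven tests), then attaches to the target:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # once /var/tmp/bdevperf.sock is listening:
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
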
00:09:22.743 [2024-11-29 16:44:46.222604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.743 [2024-11-29 16:44:46.248063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.743 [2024-11-29 16:44:46.284286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.743 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.743 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:22.743 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:23.001 Nvme0n1 00:09:23.001 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:23.261 [ 00:09:23.261 { 00:09:23.261 "name": "Nvme0n1", 00:09:23.261 "aliases": [ 00:09:23.261 "27b2c475-c37f-4842-bd05-133e7c3e153b" 00:09:23.261 ], 00:09:23.261 "product_name": "NVMe disk", 00:09:23.261 "block_size": 4096, 00:09:23.261 "num_blocks": 38912, 00:09:23.261 "uuid": "27b2c475-c37f-4842-bd05-133e7c3e153b", 00:09:23.261 "numa_id": -1, 00:09:23.261 "assigned_rate_limits": { 00:09:23.261 "rw_ios_per_sec": 0, 00:09:23.261 "rw_mbytes_per_sec": 0, 00:09:23.261 "r_mbytes_per_sec": 0, 00:09:23.261 "w_mbytes_per_sec": 0 00:09:23.261 }, 00:09:23.261 "claimed": false, 00:09:23.261 "zoned": false, 00:09:23.261 "supported_io_types": { 00:09:23.261 "read": true, 00:09:23.261 "write": true, 00:09:23.261 "unmap": true, 00:09:23.261 "flush": true, 00:09:23.261 "reset": true, 00:09:23.261 "nvme_admin": true, 00:09:23.261 "nvme_io": true, 00:09:23.261 "nvme_io_md": false, 00:09:23.261 "write_zeroes": true, 00:09:23.261 "zcopy": false, 00:09:23.261 "get_zone_info": false, 00:09:23.261 "zone_management": false, 00:09:23.261 "zone_append": false, 00:09:23.261 "compare": true, 00:09:23.261 "compare_and_write": true, 00:09:23.261 "abort": true, 00:09:23.261 "seek_hole": false, 00:09:23.261 "seek_data": false, 00:09:23.261 "copy": true, 00:09:23.261 "nvme_iov_md": false 00:09:23.261 }, 00:09:23.261 "memory_domains": [ 00:09:23.261 { 00:09:23.261 "dma_device_id": "system", 00:09:23.261 "dma_device_type": 1 00:09:23.261 } 00:09:23.261 ], 00:09:23.261 "driver_specific": { 00:09:23.261 "nvme": [ 00:09:23.261 { 00:09:23.261 "trid": { 00:09:23.261 "trtype": "TCP", 00:09:23.261 "adrfam": "IPv4", 00:09:23.261 "traddr": "10.0.0.3", 00:09:23.261 "trsvcid": "4420", 00:09:23.261 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:23.261 }, 00:09:23.261 "ctrlr_data": { 00:09:23.261 "cntlid": 1, 00:09:23.261 "vendor_id": "0x8086", 00:09:23.261 "model_number": "SPDK bdev Controller", 00:09:23.261 "serial_number": "SPDK0", 00:09:23.261 "firmware_revision": "25.01", 00:09:23.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:23.261 "oacs": { 00:09:23.261 "security": 0, 00:09:23.261 "format": 0, 00:09:23.261 "firmware": 0, 00:09:23.261 "ns_manage": 0 00:09:23.261 }, 00:09:23.261 "multi_ctrlr": true, 00:09:23.261 "ana_reporting": false 00:09:23.261 }, 00:09:23.261 "vs": { 00:09:23.261 "nvme_version": "1.3" 00:09:23.261 }, 00:09:23.261 "ns_data": { 00:09:23.261 "id": 1, 00:09:23.261 "can_share": true 00:09:23.261 } 00:09:23.261 } 
00:09:23.261 ], 00:09:23.261 "mp_policy": "active_passive" 00:09:23.261 } 00:09:23.261 } 00:09:23.261 ] 00:09:23.261 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=77435 00:09:23.261 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:23.261 16:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:23.261 Running I/O for 10 seconds... 00:09:24.638 Latency(us) 00:09:24.638 [2024-11-29T16:44:48.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.638 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:24.638 [2024-11-29T16:44:48.430Z] =================================================================================================================== 00:09:24.638 [2024-11-29T16:44:48.430Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:24.638 00:09:25.235 16:44:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5401fe96-9561-40af-93b0-c542a52c6ecf 00:09:25.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.235 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:25.235 [2024-11-29T16:44:49.027Z] =================================================================================================================== 00:09:25.235 [2024-11-29T16:44:49.027Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:25.235 00:09:25.494 true 00:09:25.494 16:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5401fe96-9561-40af-93b0-c542a52c6ecf 00:09:25.494 16:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:26.062 16:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:26.062 16:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:26.062 16:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 77435 00:09:26.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.320 Nvme0n1 : 3.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:26.320 [2024-11-29T16:44:50.112Z] =================================================================================================================== 00:09:26.320 [2024-11-29T16:44:50.112Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:26.320 00:09:27.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.255 Nvme0n1 : 4.00 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:09:27.255 [2024-11-29T16:44:51.047Z] =================================================================================================================== 00:09:27.255 [2024-11-29T16:44:51.047Z] Total : 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:09:27.255 00:09:28.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.630 Nvme0n1 : 5.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 
00:09:28.630 [2024-11-29T16:44:52.422Z] =================================================================================================================== 00:09:28.630 [2024-11-29T16:44:52.422Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:28.630 00:09:29.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.567 Nvme0n1 : 6.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:29.567 [2024-11-29T16:44:53.359Z] =================================================================================================================== 00:09:29.567 [2024-11-29T16:44:53.359Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:29.567 00:09:30.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.502 Nvme0n1 : 7.00 6622.14 25.87 0.00 0.00 0.00 0.00 0.00 00:09:30.502 [2024-11-29T16:44:54.294Z] =================================================================================================================== 00:09:30.502 [2024-11-29T16:44:54.294Z] Total : 6622.14 25.87 0.00 0.00 0.00 0.00 0.00 00:09:30.502 00:09:31.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.439 Nvme0n1 : 8.00 6519.38 25.47 0.00 0.00 0.00 0.00 0.00 00:09:31.439 [2024-11-29T16:44:55.231Z] =================================================================================================================== 00:09:31.439 [2024-11-29T16:44:55.231Z] Total : 6519.38 25.47 0.00 0.00 0.00 0.00 0.00 00:09:31.439 00:09:32.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.375 Nvme0n1 : 9.00 6486.44 25.34 0.00 0.00 0.00 0.00 0.00 00:09:32.375 [2024-11-29T16:44:56.167Z] =================================================================================================================== 00:09:32.375 [2024-11-29T16:44:56.167Z] Total : 6486.44 25.34 0.00 0.00 0.00 0.00 0.00 00:09:32.375 00:09:33.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.311 Nvme0n1 : 10.00 6485.50 25.33 0.00 0.00 0.00 0.00 0.00 00:09:33.311 [2024-11-29T16:44:57.103Z] =================================================================================================================== 00:09:33.311 [2024-11-29T16:44:57.103Z] Total : 6485.50 25.33 0.00 0.00 0.00 0.00 0.00 00:09:33.311 00:09:33.311 00:09:33.311 Latency(us) 00:09:33.311 [2024-11-29T16:44:57.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.311 Nvme0n1 : 10.00 6483.39 25.33 0.00 0.00 19733.74 14417.92 192556.68 00:09:33.311 [2024-11-29T16:44:57.103Z] =================================================================================================================== 00:09:33.311 [2024-11-29T16:44:57.103Z] Total : 6483.39 25.33 0.00 0.00 19733.74 14417.92 192556.68 00:09:33.311 { 00:09:33.311 "results": [ 00:09:33.311 { 00:09:33.311 "job": "Nvme0n1", 00:09:33.311 "core_mask": "0x2", 00:09:33.311 "workload": "randwrite", 00:09:33.311 "status": "finished", 00:09:33.312 "queue_depth": 128, 00:09:33.312 "io_size": 4096, 00:09:33.312 "runtime": 10.003405, 00:09:33.312 "iops": 6483.392404886136, 00:09:33.312 "mibps": 25.32575158158647, 00:09:33.312 "io_failed": 0, 00:09:33.312 "io_timeout": 0, 00:09:33.312 "avg_latency_us": 19733.740005606825, 00:09:33.312 "min_latency_us": 14417.92, 00:09:33.312 "max_latency_us": 192556.68363636362 00:09:33.312 } 00:09:33.312 ], 00:09:33.312 "core_count": 1 00:09:33.312 } 
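The ten-second randwrite job summarized above is only the backdrop; the point of the test is that, while that I/O is in flight, the backing AIO file is grown and the lvstore is told to claim the new clusters (the bdev_lvol_grow_lvstore call a few lines earlier). A minimal sketch of that grow step, with the file path and lvstore UUID taken from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    lvs_uuid=5401fe96-9561-40af-93b0-c542a52c6ecf
    truncate -s 400M "$aio_file"                         # grow the backing file 200M -> 400M
    "$rpc" bdev_aio_rescan aio_bdev                      # block count 51200 -> 102400
    "$rpc" bdev_lvol_grow_lvstore -u "$lvs_uuid"         # let the lvstore use the new space
    "$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'   # 49 -> 99
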
00:09:33.312 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 77419 00:09:33.312 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 77419 ']' 00:09:33.312 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 77419 00:09:33.312 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:33.312 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.312 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77419 00:09:33.312 killing process with pid 77419 00:09:33.312 Received shutdown signal, test time was about 10.000000 seconds 00:09:33.312 00:09:33.312 Latency(us) 00:09:33.312 [2024-11-29T16:44:57.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.312 [2024-11-29T16:44:57.104Z] =================================================================================================================== 00:09:33.312 [2024-11-29T16:44:57.104Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:33.312 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:33.312 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:33.312 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77419' 00:09:33.312 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 77419 00:09:33.312 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 77419 00:09:33.570 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:33.829 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:34.088 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5401fe96-9561-40af-93b0-c542a52c6ecf 00:09:34.088 16:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 77067 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 77067 00:09:34.347 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 77067 Killed "${NVMF_APP[@]}" "$@" 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:34.347 16:44:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:34.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=77568 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 77568 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 77568 ']' 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.347 16:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:34.347 [2024-11-29 16:44:58.110045] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:34.347 [2024-11-29 16:44:58.110395] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.606 [2024-11-29 16:44:58.239845] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:34.606 [2024-11-29 16:44:58.265070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.606 [2024-11-29 16:44:58.283306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.606 [2024-11-29 16:44:58.283642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.606 [2024-11-29 16:44:58.283828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.606 [2024-11-29 16:44:58.283940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.606 [2024-11-29 16:44:58.284034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
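What makes this variant "dirty" is visible just above: the previous target application was killed with SIGKILL while the lvstore was still loaded, and a fresh nvmf_tgt is started in its place. When the AIO file is re-attached below, the embedded blobstore cannot simply be opened; it has to be recovered (the "Performing recovery on blobstore" notices that follow). A rough sketch of that sequence; the pid variable is illustrative, while the paths are the ones from this trace:

    kill -9 "$nvmf_app_pid"                              # 77067 in this run: no clean lvstore unload
    # ... start a new nvmf_tgt (the nvmfappstart -m 0x1 call above), then re-attach the file:
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096     # examine finds a dirty blobstore and recovers it
    "$rpc" bdev_wait_for_examine                         # block until the lvstore/lvol are reloaded
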
00:09:34.606 [2024-11-29 16:44:58.284317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.606 [2024-11-29 16:44:58.311718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.555 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.555 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:35.555 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:35.555 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.555 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:35.555 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.555 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:35.826 [2024-11-29 16:44:59.358220] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:35.826 [2024-11-29 16:44:59.358664] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:35.826 [2024-11-29 16:44:59.358971] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:35.826 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:35.826 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 27b2c475-c37f-4842-bd05-133e7c3e153b 00:09:35.826 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=27b2c475-c37f-4842-bd05-133e7c3e153b 00:09:35.827 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.827 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:35.827 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.827 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.827 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:36.086 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27b2c475-c37f-4842-bd05-133e7c3e153b -t 2000 00:09:36.344 [ 00:09:36.344 { 00:09:36.344 "name": "27b2c475-c37f-4842-bd05-133e7c3e153b", 00:09:36.344 "aliases": [ 00:09:36.344 "lvs/lvol" 00:09:36.344 ], 00:09:36.344 "product_name": "Logical Volume", 00:09:36.344 "block_size": 4096, 00:09:36.344 "num_blocks": 38912, 00:09:36.344 "uuid": "27b2c475-c37f-4842-bd05-133e7c3e153b", 00:09:36.344 "assigned_rate_limits": { 00:09:36.344 "rw_ios_per_sec": 0, 00:09:36.344 "rw_mbytes_per_sec": 0, 00:09:36.344 "r_mbytes_per_sec": 0, 00:09:36.344 "w_mbytes_per_sec": 0 00:09:36.344 }, 00:09:36.344 
"claimed": false, 00:09:36.344 "zoned": false, 00:09:36.344 "supported_io_types": { 00:09:36.344 "read": true, 00:09:36.344 "write": true, 00:09:36.345 "unmap": true, 00:09:36.345 "flush": false, 00:09:36.345 "reset": true, 00:09:36.345 "nvme_admin": false, 00:09:36.345 "nvme_io": false, 00:09:36.345 "nvme_io_md": false, 00:09:36.345 "write_zeroes": true, 00:09:36.345 "zcopy": false, 00:09:36.345 "get_zone_info": false, 00:09:36.345 "zone_management": false, 00:09:36.345 "zone_append": false, 00:09:36.345 "compare": false, 00:09:36.345 "compare_and_write": false, 00:09:36.345 "abort": false, 00:09:36.345 "seek_hole": true, 00:09:36.345 "seek_data": true, 00:09:36.345 "copy": false, 00:09:36.345 "nvme_iov_md": false 00:09:36.345 }, 00:09:36.345 "driver_specific": { 00:09:36.345 "lvol": { 00:09:36.345 "lvol_store_uuid": "5401fe96-9561-40af-93b0-c542a52c6ecf", 00:09:36.345 "base_bdev": "aio_bdev", 00:09:36.345 "thin_provision": false, 00:09:36.345 "num_allocated_clusters": 38, 00:09:36.345 "snapshot": false, 00:09:36.345 "clone": false, 00:09:36.345 "esnap_clone": false 00:09:36.345 } 00:09:36.345 } 00:09:36.345 } 00:09:36.345 ] 00:09:36.345 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:36.345 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:36.345 16:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5401fe96-9561-40af-93b0-c542a52c6ecf 00:09:36.604 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:36.604 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5401fe96-9561-40af-93b0-c542a52c6ecf 00:09:36.604 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:36.863 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:36.863 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:37.122 [2024-11-29 16:45:00.796333] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:37.122 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5401fe96-9561-40af-93b0-c542a52c6ecf 00:09:37.122 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:37.122 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5401fe96-9561-40af-93b0-c542a52c6ecf 00:09:37.122 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.122 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.122 16:45:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.122 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.122 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.122 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.122 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.122 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:37.122 16:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5401fe96-9561-40af-93b0-c542a52c6ecf 00:09:37.382 request: 00:09:37.382 { 00:09:37.382 "uuid": "5401fe96-9561-40af-93b0-c542a52c6ecf", 00:09:37.382 "method": "bdev_lvol_get_lvstores", 00:09:37.382 "req_id": 1 00:09:37.382 } 00:09:37.382 Got JSON-RPC error response 00:09:37.382 response: 00:09:37.382 { 00:09:37.382 "code": -19, 00:09:37.382 "message": "No such device" 00:09:37.382 } 00:09:37.382 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:37.382 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.382 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:37.382 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.382 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:37.640 aio_bdev 00:09:37.640 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 27b2c475-c37f-4842-bd05-133e7c3e153b 00:09:37.640 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=27b2c475-c37f-4842-bd05-133e7c3e153b 00:09:37.640 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.640 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:37.640 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.640 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.640 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:37.899 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27b2c475-c37f-4842-bd05-133e7c3e153b -t 2000 00:09:38.158 [ 00:09:38.158 { 
00:09:38.158 "name": "27b2c475-c37f-4842-bd05-133e7c3e153b", 00:09:38.158 "aliases": [ 00:09:38.158 "lvs/lvol" 00:09:38.158 ], 00:09:38.158 "product_name": "Logical Volume", 00:09:38.158 "block_size": 4096, 00:09:38.158 "num_blocks": 38912, 00:09:38.158 "uuid": "27b2c475-c37f-4842-bd05-133e7c3e153b", 00:09:38.158 "assigned_rate_limits": { 00:09:38.158 "rw_ios_per_sec": 0, 00:09:38.158 "rw_mbytes_per_sec": 0, 00:09:38.158 "r_mbytes_per_sec": 0, 00:09:38.158 "w_mbytes_per_sec": 0 00:09:38.158 }, 00:09:38.158 "claimed": false, 00:09:38.158 "zoned": false, 00:09:38.158 "supported_io_types": { 00:09:38.158 "read": true, 00:09:38.158 "write": true, 00:09:38.158 "unmap": true, 00:09:38.158 "flush": false, 00:09:38.158 "reset": true, 00:09:38.158 "nvme_admin": false, 00:09:38.158 "nvme_io": false, 00:09:38.158 "nvme_io_md": false, 00:09:38.158 "write_zeroes": true, 00:09:38.158 "zcopy": false, 00:09:38.158 "get_zone_info": false, 00:09:38.158 "zone_management": false, 00:09:38.158 "zone_append": false, 00:09:38.158 "compare": false, 00:09:38.158 "compare_and_write": false, 00:09:38.158 "abort": false, 00:09:38.158 "seek_hole": true, 00:09:38.158 "seek_data": true, 00:09:38.158 "copy": false, 00:09:38.158 "nvme_iov_md": false 00:09:38.158 }, 00:09:38.158 "driver_specific": { 00:09:38.158 "lvol": { 00:09:38.158 "lvol_store_uuid": "5401fe96-9561-40af-93b0-c542a52c6ecf", 00:09:38.158 "base_bdev": "aio_bdev", 00:09:38.158 "thin_provision": false, 00:09:38.158 "num_allocated_clusters": 38, 00:09:38.158 "snapshot": false, 00:09:38.158 "clone": false, 00:09:38.158 "esnap_clone": false 00:09:38.158 } 00:09:38.158 } 00:09:38.158 } 00:09:38.158 ] 00:09:38.158 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:38.158 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5401fe96-9561-40af-93b0-c542a52c6ecf 00:09:38.158 16:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:38.728 16:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:38.728 16:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5401fe96-9561-40af-93b0-c542a52c6ecf 00:09:38.728 16:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:38.988 16:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:38.988 16:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 27b2c475-c37f-4842-bd05-133e7c3e153b 00:09:39.247 16:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5401fe96-9561-40af-93b0-c542a52c6ecf 00:09:39.506 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:39.765 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:40.332 ************************************ 00:09:40.332 END TEST lvs_grow_dirty 00:09:40.332 ************************************ 00:09:40.332 00:09:40.332 real 0m20.496s 00:09:40.332 user 0m40.712s 00:09:40.332 sys 0m8.938s 00:09:40.332 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.332 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:40.332 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:40.333 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:40.333 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:40.333 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:40.333 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:40.333 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:40.333 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:40.333 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:40.333 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:40.333 nvmf_trace.0 00:09:40.333 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:40.333 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:40.333 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.333 16:45:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:40.900 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.900 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:40.900 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:40.900 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.900 rmmod nvme_tcp 00:09:40.900 rmmod nvme_fabrics 00:09:40.900 rmmod nvme_keyring 00:09:40.900 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.900 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:40.900 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:40.901 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 77568 ']' 00:09:40.901 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 77568 00:09:40.901 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 77568 ']' 00:09:40.901 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 77568 00:09:40.901 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:40.901 16:45:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.901 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77568 00:09:40.901 killing process with pid 77568 00:09:40.901 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.901 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.901 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77568' 00:09:40.901 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 77568 00:09:40.901 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 77568 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.159 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.417 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:41.417 ************************************ 00:09:41.417 END TEST nvmf_lvs_grow 00:09:41.417 ************************************ 00:09:41.417 00:09:41.417 real 0m41.704s 00:09:41.417 user 1m5.891s 00:09:41.417 sys 0m12.493s 00:09:41.417 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.417 16:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:41.417 16:45:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.418 ************************************ 00:09:41.418 START TEST nvmf_bdev_io_wait 00:09:41.418 ************************************ 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:41.418 * Looking for test storage... 
00:09:41.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:41.418 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:41.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.676 --rc genhtml_branch_coverage=1 00:09:41.676 --rc genhtml_function_coverage=1 00:09:41.676 --rc genhtml_legend=1 00:09:41.676 --rc geninfo_all_blocks=1 00:09:41.676 --rc geninfo_unexecuted_blocks=1 00:09:41.676 00:09:41.676 ' 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:41.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.676 --rc genhtml_branch_coverage=1 00:09:41.676 --rc genhtml_function_coverage=1 00:09:41.676 --rc genhtml_legend=1 00:09:41.676 --rc geninfo_all_blocks=1 00:09:41.676 --rc geninfo_unexecuted_blocks=1 00:09:41.676 00:09:41.676 ' 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:41.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.676 --rc genhtml_branch_coverage=1 00:09:41.676 --rc genhtml_function_coverage=1 00:09:41.676 --rc genhtml_legend=1 00:09:41.676 --rc geninfo_all_blocks=1 00:09:41.676 --rc geninfo_unexecuted_blocks=1 00:09:41.676 00:09:41.676 ' 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:41.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.676 --rc genhtml_branch_coverage=1 00:09:41.676 --rc genhtml_function_coverage=1 00:09:41.676 --rc genhtml_legend=1 00:09:41.676 --rc geninfo_all_blocks=1 00:09:41.676 --rc geninfo_unexecuted_blocks=1 00:09:41.676 00:09:41.676 ' 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:41.676 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:41.677 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
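For orientation before the trace continues: the target this test brings up can be reproduced by hand with the same RPCs the script issues further down (transport, malloc bdev, subsystem, namespace, listener). A minimal sketch, using only commands and arguments that appear verbatim later in this run and the in-tree rpc.py path:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with the same options the test passes (-o, -u 8192)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks, exposed as a namespace of cnode1
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # listener on the target-side veth address configured by nvmftestinit
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420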
00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:41.677 
16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:41.677 Cannot find device "nvmf_init_br" 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:41.677 Cannot find device "nvmf_init_br2" 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:41.677 Cannot find device "nvmf_tgt_br" 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.677 Cannot find device "nvmf_tgt_br2" 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:41.677 Cannot find device "nvmf_init_br" 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:41.677 Cannot find device "nvmf_init_br2" 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:41.677 Cannot find device "nvmf_tgt_br" 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:41.677 Cannot find device "nvmf_tgt_br2" 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:41.677 Cannot find device "nvmf_br" 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:41.677 Cannot find device "nvmf_init_if" 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:41.677 Cannot find device "nvmf_init_if2" 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:41.677 
16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:41.677 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:41.935 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:41.935 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:09:41.935 00:09:41.935 --- 10.0.0.3 ping statistics --- 00:09:41.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.935 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:41.935 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:41.935 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:09:41.935 00:09:41.935 --- 10.0.0.4 ping statistics --- 00:09:41.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.935 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:41.935 00:09:41.935 --- 10.0.0.1 ping statistics --- 00:09:41.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.935 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:41.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:09:41.935 00:09:41.935 --- 10.0.0.2 ping statistics --- 00:09:41.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.935 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.935 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:41.936 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:41.936 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:41.936 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.936 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.936 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.936 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=77942 00:09:41.936 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:41.936 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 77942 00:09:41.936 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 77942 ']' 00:09:41.936 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.936 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.936 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.936 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.936 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.936 [2024-11-29 16:45:05.702384] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:41.936 [2024-11-29 16:45:05.702670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.193 [2024-11-29 16:45:05.832275] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:42.193 [2024-11-29 16:45:05.857452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.193 [2024-11-29 16:45:05.880677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.193 [2024-11-29 16:45:05.880949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.193 [2024-11-29 16:45:05.881126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.193 [2024-11-29 16:45:05.881285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.193 [2024-11-29 16:45:05.881353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.193 [2024-11-29 16:45:05.882316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.193 [2024-11-29 16:45:05.882464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.193 [2024-11-29 16:45:05.882389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.193 [2024-11-29 16:45:05.882469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.193 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.193 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:42.193 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.193 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.193 16:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.452 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.452 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:42.452 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.452 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.452 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.452 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:42.452 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.452 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.452 [2024-11-29 16:45:06.053122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.453 16:45:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.453 [2024-11-29 16:45:06.067993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.453 Malloc0 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.453 [2024-11-29 16:45:06.115147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=77975 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=77977 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@560 -- # local subsystem config 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.453 { 00:09:42.453 "params": { 00:09:42.453 "name": "Nvme$subsystem", 00:09:42.453 "trtype": "$TEST_TRANSPORT", 00:09:42.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.453 "adrfam": "ipv4", 00:09:42.453 "trsvcid": "$NVMF_PORT", 00:09:42.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.453 "hdgst": ${hdgst:-false}, 00:09:42.453 "ddgst": ${ddgst:-false} 00:09:42.453 }, 00:09:42.453 "method": "bdev_nvme_attach_controller" 00:09:42.453 } 00:09:42.453 EOF 00:09:42.453 )") 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=77979 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=77981 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.453 { 00:09:42.453 "params": { 00:09:42.453 "name": "Nvme$subsystem", 00:09:42.453 "trtype": "$TEST_TRANSPORT", 00:09:42.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.453 "adrfam": "ipv4", 00:09:42.453 "trsvcid": "$NVMF_PORT", 00:09:42.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.453 "hdgst": ${hdgst:-false}, 00:09:42.453 "ddgst": ${ddgst:-false} 00:09:42.453 }, 00:09:42.453 "method": "bdev_nvme_attach_controller" 00:09:42.453 } 00:09:42.453 EOF 00:09:42.453 )") 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.453 16:45:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.453 { 00:09:42.453 "params": { 00:09:42.453 "name": "Nvme$subsystem", 00:09:42.453 "trtype": "$TEST_TRANSPORT", 00:09:42.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.453 "adrfam": "ipv4", 00:09:42.453 "trsvcid": "$NVMF_PORT", 00:09:42.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.453 "hdgst": ${hdgst:-false}, 00:09:42.453 "ddgst": ${ddgst:-false} 00:09:42.453 }, 00:09:42.453 "method": "bdev_nvme_attach_controller" 00:09:42.453 } 00:09:42.453 EOF 00:09:42.453 )") 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.453 { 00:09:42.453 "params": { 00:09:42.453 "name": "Nvme$subsystem", 00:09:42.453 "trtype": "$TEST_TRANSPORT", 00:09:42.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.453 "adrfam": "ipv4", 00:09:42.453 "trsvcid": "$NVMF_PORT", 00:09:42.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.453 "hdgst": ${hdgst:-false}, 00:09:42.453 "ddgst": ${ddgst:-false} 00:09:42.453 }, 00:09:42.453 "method": "bdev_nvme_attach_controller" 00:09:42.453 } 00:09:42.453 EOF 00:09:42.453 )") 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
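Each of the four heredocs above expands to a single bdev_nvme_attach_controller entry; the resolved params blocks are printed verbatim just below. As a hedged sketch of replaying one of those bdevperf runs by hand: the outer "subsystems"/"bdev" wrapper is the usual SPDK JSON-config layout (an assumption here — only the params block and the bdevperf flags are taken from this log), and /tmp/bdevperf.json is a hypothetical path standing in for the /dev/fd/63 substitution used by the test:

    cat > /tmp/bdevperf.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ] }
    EOF
    # 1-second write workload at queue depth 128 with 4 KiB I/Os (flags as in the test above)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w write -t 1 -s 256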
00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.453 "params": { 00:09:42.453 "name": "Nvme1", 00:09:42.453 "trtype": "tcp", 00:09:42.453 "traddr": "10.0.0.3", 00:09:42.453 "adrfam": "ipv4", 00:09:42.453 "trsvcid": "4420", 00:09:42.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.453 "hdgst": false, 00:09:42.453 "ddgst": false 00:09:42.453 }, 00:09:42.453 "method": "bdev_nvme_attach_controller" 00:09:42.453 }' 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.453 "params": { 00:09:42.453 "name": "Nvme1", 00:09:42.453 "trtype": "tcp", 00:09:42.453 "traddr": "10.0.0.3", 00:09:42.453 "adrfam": "ipv4", 00:09:42.453 "trsvcid": "4420", 00:09:42.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.453 "hdgst": false, 00:09:42.453 "ddgst": false 00:09:42.453 }, 00:09:42.453 "method": "bdev_nvme_attach_controller" 00:09:42.453 }' 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:42.453 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.453 "params": { 00:09:42.453 "name": "Nvme1", 00:09:42.453 "trtype": "tcp", 00:09:42.453 "traddr": "10.0.0.3", 00:09:42.453 "adrfam": "ipv4", 00:09:42.453 "trsvcid": "4420", 00:09:42.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.454 "hdgst": false, 00:09:42.454 "ddgst": false 00:09:42.454 }, 00:09:42.454 "method": "bdev_nvme_attach_controller" 00:09:42.454 }' 00:09:42.454 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:42.454 [2024-11-29 16:45:06.175053] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:42.454 [2024-11-29 16:45:06.175130] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:42.454 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:42.454 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.454 "params": { 00:09:42.454 "name": "Nvme1", 00:09:42.454 "trtype": "tcp", 00:09:42.454 "traddr": "10.0.0.3", 00:09:42.454 "adrfam": "ipv4", 00:09:42.454 "trsvcid": "4420", 00:09:42.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.454 "hdgst": false, 00:09:42.454 "ddgst": false 00:09:42.454 }, 00:09:42.454 "method": "bdev_nvme_attach_controller" 00:09:42.454 }' 00:09:42.454 [2024-11-29 16:45:06.185028] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:42.454 [2024-11-29 16:45:06.185108] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:42.454 16:45:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 77975 00:09:42.454 [2024-11-29 16:45:06.215576] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:42.454 [2024-11-29 16:45:06.216045] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:42.454 [2024-11-29 16:45:06.229628] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:42.454 [2024-11-29 16:45:06.229750] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:42.713 [2024-11-29 16:45:06.335177] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:42.713 [2024-11-29 16:45:06.367182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.713 [2024-11-29 16:45:06.382888] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:42.713 [2024-11-29 16:45:06.383704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:42.713 [2024-11-29 16:45:06.397608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.713 [2024-11-29 16:45:06.416252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.713 [2024-11-29 16:45:06.426250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:42.713 [2024-11-29 16:45:06.433071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:42.713 [2024-11-29 16:45:06.447245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.713 [2024-11-29 16:45:06.457801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.713 [2024-11-29 16:45:06.470751] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:42.713 [2024-11-29 16:45:06.475375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:42.713 [2024-11-29 16:45:06.489261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.713 [2024-11-29 16:45:06.503609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.972 Running I/O for 1 seconds... 00:09:42.972 [2024-11-29 16:45:06.519516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:42.972 [2024-11-29 16:45:06.533348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.972 Running I/O for 1 seconds... 00:09:42.972 Running I/O for 1 seconds... 00:09:42.972 Running I/O for 1 seconds... 
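The four "Running I/O for 1 seconds..." lines come from four bdevperf instances launched in parallel (the PIDs waited on below are 77975, 77977, 77979 and 77981), each pinned to its own core mask and each driving a different I/O type against Nvme1n1, as the per-job result headers that follow show. A hedged sketch of that fan-out, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json is available; core masks, queue depth, IO size, workloads and runtime are read off the EAL parameters and result headers, and the exact option spelling in target/bdev_io_wait.sh may differ.

SPDK_ROOT=/home/vagrant/spdk_repo/spdk
BDEVPERF="$SPDK_ROOT/build/examples/bdevperf"

# One instance per workload, each on its own core; the results below report
# flush (mask 0x40), read (0x20), unmap (0x80) and write (0x10).
"$BDEVPERF" -m 0x40 -q 128 -o 4096 -w flush -t 1 --json <(gen_nvmf_target_json) &
flush_pid=$!
"$BDEVPERF" -m 0x20 -q 128 -o 4096 -w read -t 1 --json <(gen_nvmf_target_json) &
read_pid=$!
"$BDEVPERF" -m 0x10 -q 128 -o 4096 -w write -t 1 --json <(gen_nvmf_target_json) &
write_pid=$!
"$BDEVPERF" -m 0x80 -q 128 -o 4096 -w unmap -t 1 --json <(gen_nvmf_target_json) &
unmap_pid=$!
wait "$flush_pid" "$read_pid" "$write_pid" "$unmap_pid"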
00:09:43.910 155144.00 IOPS, 606.03 MiB/s 00:09:43.910 Latency(us) 00:09:43.910 [2024-11-29T16:45:07.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.910 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:43.911 Nvme1n1 : 1.00 154827.79 604.80 0.00 0.00 822.39 379.81 2025.66 00:09:43.911 [2024-11-29T16:45:07.703Z] =================================================================================================================== 00:09:43.911 [2024-11-29T16:45:07.703Z] Total : 154827.79 604.80 0.00 0.00 822.39 379.81 2025.66 00:09:43.911 9367.00 IOPS, 36.59 MiB/s 00:09:43.911 Latency(us) 00:09:43.911 [2024-11-29T16:45:07.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.911 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:43.911 Nvme1n1 : 1.01 9405.51 36.74 0.00 0.00 13542.24 8221.79 20018.27 00:09:43.911 [2024-11-29T16:45:07.703Z] =================================================================================================================== 00:09:43.911 [2024-11-29T16:45:07.703Z] Total : 9405.51 36.74 0.00 0.00 13542.24 8221.79 20018.27 00:09:43.911 8529.00 IOPS, 33.32 MiB/s 00:09:43.911 Latency(us) 00:09:43.911 [2024-11-29T16:45:07.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.911 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:43.911 Nvme1n1 : 1.01 8601.54 33.60 0.00 0.00 14815.21 7506.85 25380.31 00:09:43.911 [2024-11-29T16:45:07.703Z] =================================================================================================================== 00:09:43.911 [2024-11-29T16:45:07.703Z] Total : 8601.54 33.60 0.00 0.00 14815.21 7506.85 25380.31 00:09:43.911 7540.00 IOPS, 29.45 MiB/s 00:09:43.911 Latency(us) 00:09:43.911 [2024-11-29T16:45:07.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.911 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:43.911 Nvme1n1 : 1.01 7614.38 29.74 0.00 0.00 16728.06 4647.10 27644.28 00:09:43.911 [2024-11-29T16:45:07.703Z] =================================================================================================================== 00:09:43.911 [2024-11-29T16:45:07.703Z] Total : 7614.38 29.74 0.00 0.00 16728.06 4647.10 27644.28 00:09:44.170 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 77977 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 77979 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 77981 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.171 rmmod nvme_tcp 00:09:44.171 rmmod nvme_fabrics 00:09:44.171 rmmod nvme_keyring 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 77942 ']' 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 77942 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 77942 ']' 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 77942 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77942 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.171 killing process with pid 77942 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77942' 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 77942 00:09:44.171 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 77942 00:09:44.430 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:44.430 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:44.430 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:44.430 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:44.430 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:44.430 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:44.430 16:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.430 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:44.691 ************************************ 00:09:44.691 END TEST nvmf_bdev_io_wait 00:09:44.691 ************************************ 00:09:44.691 00:09:44.691 real 0m3.188s 00:09:44.691 user 0m12.609s 00:09:44.691 sys 0m2.054s 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.691 ************************************ 00:09:44.691 START TEST nvmf_queue_depth 00:09:44.691 ************************************ 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:44.691 * Looking for test storage... 
00:09:44.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.691 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.691 --rc genhtml_branch_coverage=1 00:09:44.691 --rc genhtml_function_coverage=1 00:09:44.691 --rc genhtml_legend=1 00:09:44.691 --rc geninfo_all_blocks=1 00:09:44.691 --rc geninfo_unexecuted_blocks=1 00:09:44.692 00:09:44.692 ' 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.692 --rc genhtml_branch_coverage=1 00:09:44.692 --rc genhtml_function_coverage=1 00:09:44.692 --rc genhtml_legend=1 00:09:44.692 --rc geninfo_all_blocks=1 00:09:44.692 --rc geninfo_unexecuted_blocks=1 00:09:44.692 00:09:44.692 ' 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.692 --rc genhtml_branch_coverage=1 00:09:44.692 --rc genhtml_function_coverage=1 00:09:44.692 --rc genhtml_legend=1 00:09:44.692 --rc geninfo_all_blocks=1 00:09:44.692 --rc geninfo_unexecuted_blocks=1 00:09:44.692 00:09:44.692 ' 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.692 --rc genhtml_branch_coverage=1 00:09:44.692 --rc genhtml_function_coverage=1 00:09:44.692 --rc genhtml_legend=1 00:09:44.692 --rc geninfo_all_blocks=1 00:09:44.692 --rc geninfo_unexecuted_blocks=1 00:09:44.692 00:09:44.692 ' 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.692 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:44.692 
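nvmftestinit, which runs next, first tears down any leftover topology (hence the harmless "Cannot find device" and "Cannot open network namespace" messages in the trace below) and then rebuilds it with nvmf_veth_init: a network namespace for the target, veth pairs for the initiator (10.0.0.1, 10.0.0.2) and target (10.0.0.3, 10.0.0.4) legs, one bridge joining them, and iptables ACCEPT rules for port 4420. A condensed sketch with a single initiator/target pair; the real function in nvmf/common.sh also sets up nvmf_init_if2 / nvmf_tgt_if2 and more cleanup.

set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: an initiator-side leg and a target-side leg (moved into the netns)
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# addresses: initiator 10.0.0.1, target 10.0.0.3 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# everything meets on one bridge
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# let NVMe/TCP traffic to port 4420 in, and let the bridge forward
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3    # initiator -> target reachability check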
16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.692 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.952 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:44.952 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:44.952 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:44.952 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:44.952 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:44.952 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:44.953 16:45:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:44.953 Cannot find device "nvmf_init_br" 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:44.953 Cannot find device "nvmf_init_br2" 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:44.953 Cannot find device "nvmf_tgt_br" 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:44.953 Cannot find device "nvmf_tgt_br2" 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:44.953 Cannot find device "nvmf_init_br" 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:44.953 Cannot find device "nvmf_init_br2" 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:44.953 Cannot find device "nvmf_tgt_br" 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:44.953 Cannot find device "nvmf_tgt_br2" 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:44.953 Cannot find device "nvmf_br" 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:44.953 Cannot find device "nvmf_init_if" 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:44.953 Cannot find device "nvmf_init_if2" 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:44.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.953 16:45:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:44.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:44.953 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:45.213 
16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:45.213 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:45.213 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:09:45.213 00:09:45.213 --- 10.0.0.3 ping statistics --- 00:09:45.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.213 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:45.213 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:45.213 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:09:45.213 00:09:45.213 --- 10.0.0.4 ping statistics --- 00:09:45.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.213 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:45.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:45.213 00:09:45.213 --- 10.0.0.1 ping statistics --- 00:09:45.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.213 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:45.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:45.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:09:45.213 00:09:45.213 --- 10.0.0.2 ping statistics --- 00:09:45.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.213 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=78235 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 78235 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 78235 ']' 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.213 16:45:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.213 [2024-11-29 16:45:08.977114] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:45.213 [2024-11-29 16:45:08.977192] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.472 [2024-11-29 16:45:09.100130] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:45.472 [2024-11-29 16:45:09.132182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.472 [2024-11-29 16:45:09.154575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.472 [2024-11-29 16:45:09.154640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.472 [2024-11-29 16:45:09.154653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.472 [2024-11-29 16:45:09.154664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.472 [2024-11-29 16:45:09.154672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.472 [2024-11-29 16:45:09.155023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.472 [2024-11-29 16:45:09.187780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:46.407 16:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.407 16:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:46.407 16:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.407 16:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.407 16:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.407 [2024-11-29 16:45:10.006443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.407 Malloc0 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:46.407 16:45:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.407 [2024-11-29 16:45:10.045882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=78267 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 78267 /var/tmp/bdevperf.sock 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 78267 ']' 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:46.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.407 16:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.407 [2024-11-29 16:45:10.110554] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:09:46.407 [2024-11-29 16:45:10.110642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78267 ] 00:09:46.666 [2024-11-29 16:45:10.238426] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:46.666 [2024-11-29 16:45:10.271371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.666 [2024-11-29 16:45:10.295211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.666 [2024-11-29 16:45:10.328650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:47.645 16:45:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.645 16:45:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:47.645 16:45:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:47.645 16:45:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.645 16:45:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.645 NVMe0n1 00:09:47.645 16:45:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.645 16:45:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:47.645 Running I/O for 10 seconds... 
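Before this 10-second run starts, the queue_depth test has provisioned the target through rpc_cmd (TCP transport, a 64 MiB / 512-byte-block malloc bdev, subsystem cnode1 with that namespace and a listener on 10.0.0.3:4420) and attached it from a bdevperf instance held at queue depth 1024. The same sequence written out as plain rpc.py calls, as a sketch; all arguments mirror the trace, while rpc_cmd in the harness adds retries and waitforlisten handles startup instead of the sleep used here.

SPDK_ROOT=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_ROOT/scripts/rpc.py"

# target side: transport, backing bdev, subsystem, namespace, listener
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# host side: a bdevperf instance held at queue depth 1024 with 4 KiB verify IO
"$SPDK_ROOT/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
    -q 1024 -o 4096 -w verify -t 10 &
sleep 1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests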
00:09:49.514 6144.00 IOPS, 24.00 MiB/s [2024-11-29T16:45:14.681Z] 7187.00 IOPS, 28.07 MiB/s [2024-11-29T16:45:15.257Z] 7605.00 IOPS, 29.71 MiB/s [2024-11-29T16:45:16.636Z] 7995.50 IOPS, 31.23 MiB/s [2024-11-29T16:45:17.573Z] 8189.60 IOPS, 31.99 MiB/s [2024-11-29T16:45:18.510Z] 8205.50 IOPS, 32.05 MiB/s [2024-11-29T16:45:19.446Z] 8270.43 IOPS, 32.31 MiB/s [2024-11-29T16:45:20.379Z] 8344.62 IOPS, 32.60 MiB/s [2024-11-29T16:45:21.314Z] 8445.44 IOPS, 32.99 MiB/s [2024-11-29T16:45:21.573Z] 8546.60 IOPS, 33.39 MiB/s 00:09:57.781 Latency(us) 00:09:57.781 [2024-11-29T16:45:21.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.781 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:57.781 Verification LBA range: start 0x0 length 0x4000 00:09:57.781 NVMe0n1 : 10.07 8582.45 33.53 0.00 0.00 118786.52 11439.01 163005.91 00:09:57.781 [2024-11-29T16:45:21.573Z] =================================================================================================================== 00:09:57.781 [2024-11-29T16:45:21.573Z] Total : 8582.45 33.53 0.00 0.00 118786.52 11439.01 163005.91 00:09:57.781 { 00:09:57.781 "results": [ 00:09:57.781 { 00:09:57.781 "job": "NVMe0n1", 00:09:57.781 "core_mask": "0x1", 00:09:57.781 "workload": "verify", 00:09:57.781 "status": "finished", 00:09:57.781 "verify_range": { 00:09:57.781 "start": 0, 00:09:57.781 "length": 16384 00:09:57.781 }, 00:09:57.781 "queue_depth": 1024, 00:09:57.781 "io_size": 4096, 00:09:57.781 "runtime": 10.065428, 00:09:57.781 "iops": 8582.446767290969, 00:09:57.781 "mibps": 33.525182684730346, 00:09:57.781 "io_failed": 0, 00:09:57.781 "io_timeout": 0, 00:09:57.781 "avg_latency_us": 118786.52420699483, 00:09:57.781 "min_latency_us": 11439.01090909091, 00:09:57.781 "max_latency_us": 163005.90545454546 00:09:57.781 } 00:09:57.781 ], 00:09:57.781 "core_count": 1 00:09:57.781 } 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 78267 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 78267 ']' 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 78267 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78267 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.781 killing process with pid 78267 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78267' 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 78267 00:09:57.781 Received shutdown signal, test time was about 10.000000 seconds 00:09:57.781 00:09:57.781 Latency(us) 00:09:57.781 [2024-11-29T16:45:21.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.781 [2024-11-29T16:45:21.573Z] =================================================================================================================== 
00:09:57.781 [2024-11-29T16:45:21.573Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 78267 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.781 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.781 rmmod nvme_tcp 00:09:57.781 rmmod nvme_fabrics 00:09:58.040 rmmod nvme_keyring 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 78235 ']' 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 78235 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 78235 ']' 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 78235 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78235 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78235' 00:09:58.040 killing process with pid 78235 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 78235 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 78235 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:58.040 16:45:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:58.040 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:58.299 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:58.299 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:58.299 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:58.299 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:58.299 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:58.299 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:58.299 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:58.299 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:58.299 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:58.299 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:58.300 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.300 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.300 16:45:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.300 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:58.300 00:09:58.300 real 0m13.739s 00:09:58.300 user 0m23.553s 00:09:58.300 sys 0m2.091s 00:09:58.300 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.300 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.300 ************************************ 00:09:58.300 END TEST nvmf_queue_depth 00:09:58.300 ************************************ 00:09:58.300 16:45:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:58.300 16:45:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:58.300 16:45:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:09:58.300 16:45:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.300 ************************************ 00:09:58.300 START TEST nvmf_target_multipath 00:09:58.300 ************************************ 00:09:58.300 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:58.560 * Looking for test storage... 00:09:58.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:58.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.560 --rc genhtml_branch_coverage=1 00:09:58.560 --rc genhtml_function_coverage=1 00:09:58.560 --rc genhtml_legend=1 00:09:58.560 --rc geninfo_all_blocks=1 00:09:58.560 --rc geninfo_unexecuted_blocks=1 00:09:58.560 00:09:58.560 ' 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:58.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.560 --rc genhtml_branch_coverage=1 00:09:58.560 --rc genhtml_function_coverage=1 00:09:58.560 --rc genhtml_legend=1 00:09:58.560 --rc geninfo_all_blocks=1 00:09:58.560 --rc geninfo_unexecuted_blocks=1 00:09:58.560 00:09:58.560 ' 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:58.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.560 --rc genhtml_branch_coverage=1 00:09:58.560 --rc genhtml_function_coverage=1 00:09:58.560 --rc genhtml_legend=1 00:09:58.560 --rc geninfo_all_blocks=1 00:09:58.560 --rc geninfo_unexecuted_blocks=1 00:09:58.560 00:09:58.560 ' 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:58.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.560 --rc genhtml_branch_coverage=1 00:09:58.560 --rc genhtml_function_coverage=1 00:09:58.560 --rc genhtml_legend=1 00:09:58.560 --rc geninfo_all_blocks=1 00:09:58.560 --rc geninfo_unexecuted_blocks=1 00:09:58.560 00:09:58.560 ' 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.560 
16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.560 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.561 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:58.561 16:45:22 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:58.561 Cannot find device "nvmf_init_br" 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:58.561 Cannot find device "nvmf_init_br2" 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:58.561 Cannot find device "nvmf_tgt_br" 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:58.561 Cannot find device "nvmf_tgt_br2" 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:58.561 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:58.820 Cannot find device "nvmf_init_br" 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:58.820 Cannot find device "nvmf_init_br2" 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:58.820 Cannot find device "nvmf_tgt_br" 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:58.820 Cannot find device "nvmf_tgt_br2" 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:58.820 Cannot find device "nvmf_br" 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:58.820 Cannot find device "nvmf_init_if" 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:58.820 Cannot find device "nvmf_init_if2" 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:58.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:58.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
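For reference, the virtual topology that the nvmf_veth_init trace above is building reduces to the sketch below: the initiator veths stay in the host namespace, their target-side peers are moved into nvmf_tgt_ns_spdk, and 10.0.0.1/10.0.0.2 face 10.0.0.3/10.0.0.4. This is a condensed, hand-written summary rather than the harness code itself; every name and address is taken from the log, and the bridge plus iptables wiring continues in the trace that follows.

# one initiator-side pair and one target-side pair (the second pair of each is created the same way)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator interface + its bridge port
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target interface + its bridge port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # host/initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up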
00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:58.820 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:59.080 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:59.080 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:09:59.080 00:09:59.080 --- 10.0.0.3 ping statistics --- 00:09:59.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.080 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:59.080 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:59.080 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:09:59.080 00:09:59.080 --- 10.0.0.4 ping statistics --- 00:09:59.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.080 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:59.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:09:59.080 00:09:59.080 --- 10.0.0.1 ping statistics --- 00:09:59.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.080 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:59.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:09:59.080 00:09:59.080 --- 10.0.0.2 ping statistics --- 00:09:59.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.080 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.080 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=78643 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 78643 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 78643 ']' 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
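The nvmfappstart step above amounts to launching the SPDK target inside the test namespace and then waiting for its RPC socket. A minimal sketch follows, with the binary path and flags copied from the log; the readiness probe is an illustrative stand-in for the waitforlisten helper in autotest_common.sh, not its actual implementation.

# launch nvmf_tgt in the target namespace with the flags seen in the log
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# illustrative readiness check: poll the RPC socket until the target answers
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done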
00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.081 16:45:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:59.081 [2024-11-29 16:45:22.748395] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:09:59.081 [2024-11-29 16:45:22.748480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.339 [2024-11-29 16:45:22.873754] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:59.339 [2024-11-29 16:45:22.902055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.339 [2024-11-29 16:45:22.924563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.339 [2024-11-29 16:45:22.924625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.339 [2024-11-29 16:45:22.924636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.339 [2024-11-29 16:45:22.924644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.339 [2024-11-29 16:45:22.924651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
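The startup notices above also describe how to inspect the tracepoints enabled by -e 0xFFFF; the two commands below are quoted from those notices rather than added behaviour.

spdk_trace -s nvmf -i 0        # capture a snapshot of events at runtime, as the notice suggests
cp /dev/shm/nvmf_trace.0 .     # or copy the shm file for offline analysis/debug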
00:09:59.339 [2024-11-29 16:45:22.925344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.339 [2024-11-29 16:45:22.925441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.339 [2024-11-29 16:45:22.925498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.339 [2024-11-29 16:45:22.925502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.339 [2024-11-29 16:45:22.974192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:59.339 16:45:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.339 16:45:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:09:59.339 16:45:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.339 16:45:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:59.339 16:45:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:59.339 16:45:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.339 16:45:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:59.598 [2024-11-29 16:45:23.302271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.598 16:45:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:59.856 Malloc0 00:09:59.856 16:45:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:00.112 16:45:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:00.678 16:45:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:00.678 [2024-11-29 16:45:24.457572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:00.936 16:45:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:10:00.936 [2024-11-29 16:45:24.705755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:10:00.936 16:45:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:01.195 16:45:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:10:01.195 16:45:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:01.195 16:45:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:10:01.195 16:45:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:01.195 16:45:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:01.195 16:45:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:10:03.749 16:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:03.749 16:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:03.749 16:45:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
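On the host side, the trace above boils down to connecting the same subsystem over both listeners and then locating the two resulting path devices. A condensed sketch, with every NQN, address, flag, and sysfs path taken from the log:

nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b \
             --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b \
             --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G
# both controllers land under the same subsystem; the per-path block devices are
ls -d /sys/class/nvme-subsystem/nvme-subsys0/nvme*/nvme*c*      # -> nvme0c0n1, nvme0c1n1
cat /sys/block/nvme0c0n1/ana_state                              # ANA state of the first path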
00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=78725 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:03.749 16:45:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:03.749 [global] 00:10:03.749 thread=1 00:10:03.749 invalidate=1 00:10:03.749 rw=randrw 00:10:03.749 time_based=1 00:10:03.749 runtime=6 00:10:03.749 ioengine=libaio 00:10:03.749 direct=1 00:10:03.749 bs=4096 00:10:03.749 iodepth=128 00:10:03.749 norandommap=0 00:10:03.749 numjobs=1 00:10:03.749 00:10:03.749 verify_dump=1 00:10:03.749 verify_backlog=512 00:10:03.749 verify_state_save=0 00:10:03.749 do_verify=1 00:10:03.749 verify=crc32c-intel 00:10:03.749 [job0] 00:10:03.749 filename=/dev/nvme0n1 00:10:03.749 Could not set queue depth (nvme0n1) 00:10:03.749 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.749 fio-3.35 00:10:03.749 Starting 1 thread 00:10:04.315 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:04.573 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:04.831 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:04.831 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:04.831 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
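The check_ana_state calls traced on either side of this point reduce to polling a path's sysfs ANA attribute until it reports the expected state. Below is a simplified, hand-written equivalent for illustration; the real helper lives in test/nvmf/target/multipath.sh and may differ in detail.

check_ana_state() {                        # simplified sketch, not the harness source
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        sleep 1
        (( --timeout == 0 )) && return 1   # give up after roughly 20 seconds
    done
}
check_ana_state nvme0c0n1 inaccessible      # e.g. after the 10.0.0.3 listener is set inaccessible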
00:10:04.831 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:04.831 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:04.831 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:04.831 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:04.831 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:04.831 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:04.831 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:04.831 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:04.831 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:04.831 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:05.397 16:45:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:05.655 16:45:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:05.655 16:45:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:05.655 16:45:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:05.655 16:45:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:05.655 16:45:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:05.655 16:45:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:05.655 16:45:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:05.655 16:45:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:05.655 16:45:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:05.655 16:45:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:05.655 16:45:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:05.655 16:45:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:05.655 16:45:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 78725 00:10:09.840 00:10:09.840 job0: (groupid=0, jobs=1): err= 0: pid=78751: Fri Nov 29 16:45:33 2024 00:10:09.840 read: IOPS=10.2k, BW=39.7MiB/s (41.6MB/s)(238MiB/6007msec) 00:10:09.840 slat (usec): min=2, max=6722, avg=57.37, stdev=224.01 00:10:09.840 clat (usec): min=1431, max=15817, avg=8558.42, stdev=1474.87 00:10:09.840 lat (usec): min=1447, max=15827, avg=8615.79, stdev=1478.66 00:10:09.840 clat percentiles (usec): 00:10:09.840 | 1.00th=[ 4490], 5.00th=[ 6521], 10.00th=[ 7308], 20.00th=[ 7832], 00:10:09.840 | 30.00th=[ 8094], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586], 00:10:09.840 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9765], 95.00th=[11994], 00:10:09.840 | 99.00th=[13435], 99.50th=[13829], 99.90th=[14615], 99.95th=[14746], 00:10:09.840 | 99.99th=[15401] 00:10:09.840 bw ( KiB/s): min= 3848, max=27056, per=51.59%, avg=20948.91, stdev=7299.87, samples=11 00:10:09.840 iops : min= 962, max= 6764, avg=5237.18, stdev=1824.95, samples=11 00:10:09.840 write: IOPS=6038, BW=23.6MiB/s (24.7MB/s)(125MiB/5314msec); 0 zone resets 00:10:09.840 slat (usec): min=3, max=1867, avg=67.57, stdev=162.11 00:10:09.840 clat (usec): min=1178, max=15548, avg=7486.72, stdev=1318.79 00:10:09.840 lat (usec): min=1286, max=15572, avg=7554.29, stdev=1323.59 00:10:09.840 clat percentiles (usec): 00:10:09.840 | 1.00th=[ 3556], 5.00th=[ 4490], 10.00th=[ 5866], 20.00th=[ 6980], 00:10:09.840 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7832], 00:10:09.840 | 70.00th=[ 8029], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8979], 00:10:09.840 | 99.00th=[11600], 99.50th=[12256], 99.90th=[13829], 99.95th=[14353], 00:10:09.840 | 99.99th=[15270] 00:10:09.841 bw ( KiB/s): min= 4096, max=26632, per=87.03%, avg=21020.36, stdev=7120.07, samples=11 00:10:09.841 iops : min= 1024, max= 6658, avg=5255.09, stdev=1780.02, samples=11 00:10:09.841 lat (msec) : 2=0.03%, 4=1.21%, 10=92.52%, 20=6.24% 00:10:09.841 cpu : usr=5.59%, sys=21.65%, ctx=5529, majf=0, minf=102 00:10:09.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:09.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.841 issued rwts: total=60980,32088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.841 00:10:09.841 Run status group 0 (all jobs): 00:10:09.841 READ: bw=39.7MiB/s (41.6MB/s), 39.7MiB/s-39.7MiB/s (41.6MB/s-41.6MB/s), io=238MiB (250MB), run=6007-6007msec 00:10:09.841 WRITE: bw=23.6MiB/s (24.7MB/s), 23.6MiB/s-23.6MiB/s (24.7MB/s-24.7MB/s), io=125MiB (131MB), run=5314-5314msec 00:10:09.841 00:10:09.841 Disk stats (read/write): 00:10:09.841 nvme0n1: ios=60065/31467, merge=0/0, ticks=493783/221119, in_queue=714902, util=98.60% 00:10:09.841 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:10.099 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=78833 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:10.358 16:45:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:10.358 [global] 00:10:10.359 thread=1 00:10:10.359 invalidate=1 00:10:10.359 rw=randrw 00:10:10.359 time_based=1 00:10:10.359 runtime=6 00:10:10.359 ioengine=libaio 00:10:10.359 direct=1 00:10:10.359 bs=4096 00:10:10.359 iodepth=128 00:10:10.359 norandommap=0 00:10:10.359 numjobs=1 00:10:10.359 00:10:10.359 verify_dump=1 00:10:10.359 verify_backlog=512 00:10:10.359 verify_state_save=0 00:10:10.359 do_verify=1 00:10:10.359 verify=crc32c-intel 00:10:10.359 [job0] 00:10:10.359 filename=/dev/nvme0n1 00:10:10.359 Could not set queue depth (nvme0n1) 00:10:10.359 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.359 fio-3.35 00:10:10.359 Starting 1 thread 00:10:11.294 16:45:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:11.552 16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:11.810 
16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:11.811 16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:11.811 16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:11.811 16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:11.811 16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:11.811 16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:11.811 16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:11.811 16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:11.811 16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:11.811 16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:11.811 16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:11.811 16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:11.811 16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:12.070 16:45:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:12.328 16:45:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:12.329 16:45:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:12.329 16:45:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:12.329 16:45:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:12.329 16:45:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:12.329 16:45:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:12.329 16:45:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:12.329 16:45:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:12.329 16:45:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:12.329 16:45:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:12.329 16:45:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:12.329 16:45:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:12.329 16:45:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 78833 00:10:16.516 00:10:16.516 job0: (groupid=0, jobs=1): err= 0: pid=78854: Fri Nov 29 16:45:40 2024 00:10:16.516 read: IOPS=11.1k, BW=43.3MiB/s (45.4MB/s)(260MiB/6003msec) 00:10:16.516 slat (usec): min=7, max=6613, avg=44.24, stdev=201.49 00:10:16.517 clat (usec): min=723, max=19026, avg=7930.93, stdev=2024.17 00:10:16.517 lat (usec): min=759, max=19047, avg=7975.17, stdev=2040.37 00:10:16.517 clat percentiles (usec): 00:10:16.517 | 1.00th=[ 3064], 5.00th=[ 4359], 10.00th=[ 5211], 20.00th=[ 6259], 00:10:16.517 | 30.00th=[ 7177], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8455], 00:10:16.517 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9765], 95.00th=[11600], 00:10:16.517 | 99.00th=[13435], 99.50th=[13829], 99.90th=[15139], 99.95th=[16581], 00:10:16.517 | 99.99th=[18220] 00:10:16.517 bw ( KiB/s): min=11696, max=37040, per=52.65%, avg=23364.36, stdev=8142.21, samples=11 00:10:16.517 iops : min= 2924, max= 9260, avg=5841.27, stdev=2035.88, samples=11 00:10:16.517 write: IOPS=6396, BW=25.0MiB/s (26.2MB/s)(137MiB/5480msec); 0 zone resets 00:10:16.517 slat (usec): min=14, max=2027, avg=56.99, stdev=144.90 00:10:16.517 clat (usec): min=436, max=18020, avg=6676.57, stdev=1899.59 00:10:16.517 lat (usec): min=504, max=18045, avg=6733.56, stdev=1914.94 00:10:16.517 clat percentiles (usec): 00:10:16.517 | 1.00th=[ 2671], 5.00th=[ 3425], 10.00th=[ 3884], 20.00th=[ 4621], 00:10:16.517 | 30.00th=[ 5407], 40.00th=[ 6783], 50.00th=[ 7308], 60.00th=[ 7635], 00:10:16.517 | 70.00th=[ 7898], 80.00th=[ 8225], 90.00th=[ 8586], 95.00th=[ 8848], 00:10:16.517 | 99.00th=[11076], 99.50th=[11994], 99.90th=[13698], 99.95th=[15664], 00:10:16.517 | 99.99th=[16909] 00:10:16.517 bw ( KiB/s): min=12176, max=37584, per=91.32%, avg=23363.64, stdev=7973.64, samples=11 00:10:16.517 iops : min= 3044, max= 9396, avg=5840.91, stdev=1993.41, samples=11 00:10:16.517 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:16.517 lat (msec) : 2=0.12%, 4=6.08%, 10=87.80%, 20=5.98% 00:10:16.517 cpu : usr=6.01%, sys=21.94%, ctx=5673, majf=0, minf=78 00:10:16.517 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:16.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.517 issued rwts: total=66600,35051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.517 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:10:16.517 00:10:16.517 Run status group 0 (all jobs): 00:10:16.517 READ: bw=43.3MiB/s (45.4MB/s), 43.3MiB/s-43.3MiB/s (45.4MB/s-45.4MB/s), io=260MiB (273MB), run=6003-6003msec 00:10:16.517 WRITE: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=137MiB (144MB), run=5480-5480msec 00:10:16.517 00:10:16.517 Disk stats (read/write): 00:10:16.517 nvme0n1: ios=65684/34430, merge=0/0, ticks=498976/215018, in_queue=713994, util=98.61% 00:10:16.517 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:16.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:16.517 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:16.517 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:10:16.517 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:16.517 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.517 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.517 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:16.517 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:10:16.517 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.084 rmmod nvme_tcp 00:10:17.084 rmmod nvme_fabrics 00:10:17.084 rmmod nvme_keyring 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
78643 ']' 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 78643 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 78643 ']' 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 78643 00:10:17.084 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78643 00:10:17.085 killing process with pid 78643 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78643' 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 78643 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 78643 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:17.085 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:17.353 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:17.353 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:17.353 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.353 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:17.353 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:17.353 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:17.353 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:17.353 16:45:40 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:17.353 16:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:17.353 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:17.353 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.353 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.353 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:17.353 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.353 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.353 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.353 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:17.353 00:10:17.353 real 0m19.027s 00:10:17.353 user 1m10.381s 00:10:17.353 sys 0m9.939s 00:10:17.353 ************************************ 00:10:17.353 END TEST nvmf_target_multipath 00:10:17.353 ************************************ 00:10:17.353 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.353 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.669 ************************************ 00:10:17.669 START TEST nvmf_zcopy 00:10:17.669 ************************************ 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:17.669 * Looking for test storage... 
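The teardown traced above (controller disconnect, subsystem deletion, module unload, firewall and link cleanup) can be reproduced by hand in roughly the following form. This is a hedged sketch using the device and namespace names that appear in this log (nvmf_br, nvmf_init_*, nvmf_tgt_*, nvmf_tgt_ns_spdk); the exact ordering and flags live in nvmf/common.sh and may differ.

# Disconnect both multipath controllers and drop the subsystem on the target.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# Unload the initiator-side modules pulled in by the test.
modprobe -r nvme-tcp nvme-fabrics nvme-keyring
# Remove only the SPDK-tagged firewall rules (this is what the iptr helper above does).
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Detach the bridge ports, then delete the bridge, the veth pairs and the target namespace.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumption: _remove_spdk_ns ultimately deletes the namespace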
00:10:17.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.669 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.670 --rc genhtml_branch_coverage=1 00:10:17.670 --rc genhtml_function_coverage=1 00:10:17.670 --rc genhtml_legend=1 00:10:17.670 --rc geninfo_all_blocks=1 00:10:17.670 --rc geninfo_unexecuted_blocks=1 00:10:17.670 00:10:17.670 ' 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.670 --rc genhtml_branch_coverage=1 00:10:17.670 --rc genhtml_function_coverage=1 00:10:17.670 --rc genhtml_legend=1 00:10:17.670 --rc geninfo_all_blocks=1 00:10:17.670 --rc geninfo_unexecuted_blocks=1 00:10:17.670 00:10:17.670 ' 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.670 --rc genhtml_branch_coverage=1 00:10:17.670 --rc genhtml_function_coverage=1 00:10:17.670 --rc genhtml_legend=1 00:10:17.670 --rc geninfo_all_blocks=1 00:10:17.670 --rc geninfo_unexecuted_blocks=1 00:10:17.670 00:10:17.670 ' 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.670 --rc genhtml_branch_coverage=1 00:10:17.670 --rc genhtml_function_coverage=1 00:10:17.670 --rc genhtml_legend=1 00:10:17.670 --rc geninfo_all_blocks=1 00:10:17.670 --rc geninfo_unexecuted_blocks=1 00:10:17.670 00:10:17.670 ' 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
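The lcov version gate traced above reduces to a field-wise numeric compare after splitting both version strings on '.', '-' and ':'. A condensed sketch of that logic follows; it is not the verbatim scripts/common.sh implementation (the real helper handles more operators and non-numeric fields).

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:          # split version strings on dots, dashes and colons
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == *'='* ]]      # equal versions only satisfy ==, <= and >=
}

lt 1.15 2 && echo "lcov is older than 2.x, keep the legacy --rc lcov_* options"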
00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.670 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
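The "[: : integer expression expected" message printed above is non-fatal: line 33 of nvmf/common.sh ends up evaluating '[' '' -eq 1 ']' because an optional flag is empty in this environment, and the run simply continues. A hedged way to write that kind of numeric check without the warning is sketched below; the variable name is purely illustrative and is not the one used in common.sh.

flag=${SPDK_TEST_OPTIONAL_FLAG:-0}   # default an unset or empty flag to 0 before comparing
if [ "$flag" -eq 1 ]; then
    echo "optional feature requested"
fi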
00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:17.670 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:17.671 Cannot find device "nvmf_init_br" 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:17.671 16:45:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:17.671 Cannot find device "nvmf_init_br2" 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:17.671 Cannot find device "nvmf_tgt_br" 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.671 Cannot find device "nvmf_tgt_br2" 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:17.671 Cannot find device "nvmf_init_br" 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:17.671 Cannot find device "nvmf_init_br2" 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:17.671 Cannot find device "nvmf_tgt_br" 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:17.671 Cannot find device "nvmf_tgt_br2" 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:17.671 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:17.940 Cannot find device "nvmf_br" 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:17.940 Cannot find device "nvmf_init_if" 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:17.940 Cannot find device "nvmf_init_if2" 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:17.940 16:45:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:17.940 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:17.940 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.140 ms 00:10:17.940 00:10:17.940 --- 10.0.0.3 ping statistics --- 00:10:17.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.940 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:17.940 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:17.940 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:10:17.940 00:10:17.940 --- 10.0.0.4 ping statistics --- 00:10:17.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.940 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:17.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:17.940 00:10:17.940 --- 10.0.0.1 ping statistics --- 00:10:17.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.940 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:17.940 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:17.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:17.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:10:17.940 00:10:17.940 --- 10.0.0.2 ping statistics --- 00:10:17.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.940 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:18.199 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=79160 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 79160 00:10:18.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 79160 ']' 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.200 16:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.200 [2024-11-29 16:45:41.811489] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:18.200 [2024-11-29 16:45:41.811858] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.200 [2024-11-29 16:45:41.936971] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:10:18.200 [2024-11-29 16:45:41.956236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.200 [2024-11-29 16:45:41.976194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.200 [2024-11-29 16:45:41.976248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.200 [2024-11-29 16:45:41.976274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.200 [2024-11-29 16:45:41.976282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.200 [2024-11-29 16:45:41.976289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.200 [2024-11-29 16:45:41.976652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.459 [2024-11-29 16:45:42.006411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.460 [2024-11-29 16:45:42.140069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.460 [2024-11-29 16:45:42.156199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.460 malloc0 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:18.460 { 00:10:18.460 "params": { 00:10:18.460 "name": "Nvme$subsystem", 00:10:18.460 "trtype": "$TEST_TRANSPORT", 00:10:18.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.460 "adrfam": "ipv4", 00:10:18.460 "trsvcid": "$NVMF_PORT", 00:10:18.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.460 "hdgst": ${hdgst:-false}, 00:10:18.460 "ddgst": ${ddgst:-false} 00:10:18.460 }, 00:10:18.460 "method": "bdev_nvme_attach_controller" 00:10:18.460 } 00:10:18.460 EOF 00:10:18.460 )") 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
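For reference, the target-side bring-up traced above can be consolidated into a handful of RPC calls before the bdevperf config generation completes below. rpc_cmd in the trace resolves to scripts/rpc.py against the target's default RPC socket; this sketch mirrors the arguments recorded in this log and is not a substitute for zcopy.sh itself.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# TCP transport with in-capsule data disabled (-c 0) and zero-copy enabled (--zcopy).
"$RPC" nvmf_create_transport -t tcp -o -c 0 --zcopy
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# Data and discovery listeners on the target-namespace address used throughout this run.
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
# 32 MiB RAM-backed bdev with 4096-byte blocks, exposed as namespace 1.
"$RPC" bdev_malloc_create 32 4096 -b malloc0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1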
00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:18.460 16:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:18.460 "params": { 00:10:18.460 "name": "Nvme1", 00:10:18.460 "trtype": "tcp", 00:10:18.460 "traddr": "10.0.0.3", 00:10:18.460 "adrfam": "ipv4", 00:10:18.460 "trsvcid": "4420", 00:10:18.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.460 "hdgst": false, 00:10:18.460 "ddgst": false 00:10:18.460 }, 00:10:18.460 "method": "bdev_nvme_attach_controller" 00:10:18.460 }' 00:10:18.719 [2024-11-29 16:45:42.250253] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:18.719 [2024-11-29 16:45:42.250398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79185 ] 00:10:18.719 [2024-11-29 16:45:42.377730] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:18.719 [2024-11-29 16:45:42.414669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.719 [2024-11-29 16:45:42.439209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.719 [2024-11-29 16:45:42.480850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:18.979 Running I/O for 10 seconds... 00:10:20.849 6274.00 IOPS, 49.02 MiB/s [2024-11-29T16:45:46.017Z] 6275.00 IOPS, 49.02 MiB/s [2024-11-29T16:45:46.955Z] 6309.67 IOPS, 49.29 MiB/s [2024-11-29T16:45:47.891Z] 6313.75 IOPS, 49.33 MiB/s [2024-11-29T16:45:48.828Z] 6325.20 IOPS, 49.42 MiB/s [2024-11-29T16:45:49.764Z] 6314.33 IOPS, 49.33 MiB/s [2024-11-29T16:45:50.702Z] 6283.71 IOPS, 49.09 MiB/s [2024-11-29T16:45:51.641Z] 6266.25 IOPS, 48.96 MiB/s [2024-11-29T16:45:53.019Z] 6254.89 IOPS, 48.87 MiB/s [2024-11-29T16:45:53.019Z] 6219.10 IOPS, 48.59 MiB/s 00:10:29.227 Latency(us) 00:10:29.227 [2024-11-29T16:45:53.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.227 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:29.227 Verification LBA range: start 0x0 length 0x1000 00:10:29.227 Nvme1n1 : 10.01 6222.30 48.61 0.00 0.00 20505.93 2398.02 36938.47 00:10:29.227 [2024-11-29T16:45:53.019Z] =================================================================================================================== 00:10:29.227 [2024-11-29T16:45:53.019Z] Total : 6222.30 48.61 0.00 0.00 20505.93 2398.02 36938.47 00:10:29.227 16:45:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=79301 00:10:29.227 16:45:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:29.227 16:45:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.227 16:45:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:29.227 16:45:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:29.227 16:45:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:29.227 16:45:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # 
local subsystem config 00:10:29.227 16:45:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:29.227 16:45:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:29.227 { 00:10:29.227 "params": { 00:10:29.227 "name": "Nvme$subsystem", 00:10:29.227 "trtype": "$TEST_TRANSPORT", 00:10:29.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:29.227 "adrfam": "ipv4", 00:10:29.227 "trsvcid": "$NVMF_PORT", 00:10:29.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:29.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:29.227 "hdgst": ${hdgst:-false}, 00:10:29.227 "ddgst": ${ddgst:-false} 00:10:29.227 }, 00:10:29.227 "method": "bdev_nvme_attach_controller" 00:10:29.227 } 00:10:29.227 EOF 00:10:29.227 )") 00:10:29.227 16:45:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:29.227 [2024-11-29 16:45:52.731056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.227 [2024-11-29 16:45:52.731253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.227 16:45:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:29.227 16:45:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:29.227 16:45:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:29.227 "params": { 00:10:29.227 "name": "Nvme1", 00:10:29.227 "trtype": "tcp", 00:10:29.227 "traddr": "10.0.0.3", 00:10:29.227 "adrfam": "ipv4", 00:10:29.227 "trsvcid": "4420", 00:10:29.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:29.227 "hdgst": false, 00:10:29.227 "ddgst": false 00:10:29.227 }, 00:10:29.227 "method": "bdev_nvme_attach_controller" 00:10:29.227 }' 00:10:29.227 [2024-11-29 16:45:52.743018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.227 [2024-11-29 16:45:52.743175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.227 [2024-11-29 16:45:52.755012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.227 [2024-11-29 16:45:52.755171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.227 [2024-11-29 16:45:52.767013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.227 [2024-11-29 16:45:52.767194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.227 [2024-11-29 16:45:52.779024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.227 [2024-11-29 16:45:52.779167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.227 [2024-11-29 16:45:52.791013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.227 [2024-11-29 16:45:52.791175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.227 [2024-11-29 16:45:52.792317] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
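bdevperf in this run is not pointed at a config file on disk; the output of gen_nvmf_target_json is fed through a process-substitution descriptor (/dev/fd/63 above). A hedged, self-contained equivalent is sketched below: the bdev_nvme_attach_controller parameters are copied from the JSON printed in this trace, while the surrounding "subsystems"/"config" wrapper is the standard SPDK JSON-config shape and may omit extra entries the real helper adds.

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload knobs as the trace: 5 s runtime, queue depth 128, 50/50 random read/write, 8 KiB I/O.
"$BDEVPERF" --json /tmp/nvme1.json -t 5 -q 128 -w randrw -M 50 -o 8192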
00:10:29.227 [2024-11-29 16:45:52.792625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79301 ] 00:10:29.227 [2024-11-29 16:45:52.803019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.227 [2024-11-29 16:45:52.803050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.227 [2024-11-29 16:45:52.815025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.227 [2024-11-29 16:45:52.815056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.227 [2024-11-29 16:45:52.827030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.227 [2024-11-29 16:45:52.827062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.227 [2024-11-29 16:45:52.839023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.227 [2024-11-29 16:45:52.839053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.227 [2024-11-29 16:45:52.851033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.227 [2024-11-29 16:45:52.851062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.227 [2024-11-29 16:45:52.863029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.227 [2024-11-29 16:45:52.863058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.227 [2024-11-29 16:45:52.875036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.227 [2024-11-29 16:45:52.875069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.227 [2024-11-29 16:45:52.887035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.227 [2024-11-29 16:45:52.887064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.228 [2024-11-29 16:45:52.899045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.228 [2024-11-29 16:45:52.899222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.228 [2024-11-29 16:45:52.911051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.228 [2024-11-29 16:45:52.911081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.228 [2024-11-29 16:45:52.917051] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:29.228 [2024-11-29 16:45:52.923053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.228 [2024-11-29 16:45:52.923082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.228 [2024-11-29 16:45:52.935055] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.228 [2024-11-29 16:45:52.935085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.228 [2024-11-29 16:45:52.944450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.228 [2024-11-29 16:45:52.947061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.228 [2024-11-29 16:45:52.947091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.228 [2024-11-29 16:45:52.959092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.228 [2024-11-29 16:45:52.959132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.228 [2024-11-29 16:45:52.965252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.228 [2024-11-29 16:45:52.971072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.228 [2024-11-29 16:45:52.971102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.228 [2024-11-29 16:45:52.983103] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.228 [2024-11-29 16:45:52.983144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.228 [2024-11-29 16:45:52.995104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.228 [2024-11-29 16:45:52.995147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.228 [2024-11-29 16:45:53.003464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:29.228 [2024-11-29 16:45:53.007101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.228 [2024-11-29 16:45:53.007136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.486 [2024-11-29 16:45:53.019112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.486 [2024-11-29 16:45:53.019150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.486 [2024-11-29 16:45:53.031093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.486 [2024-11-29 16:45:53.031126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.486 [2024-11-29 16:45:53.043335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.486 [2024-11-29 16:45:53.043390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.486 [2024-11-29 16:45:53.055333] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.486 [2024-11-29 16:45:53.055389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.486 [2024-11-29 16:45:53.067361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.486 [2024-11-29 16:45:53.067397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.487 [2024-11-29 16:45:53.079368] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.487 [2024-11-29 16:45:53.079404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.487 [2024-11-29 16:45:53.091365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.487 [2024-11-29 16:45:53.091397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.487 [2024-11-29 16:45:53.103418] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.487 [2024-11-29 16:45:53.103456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.487 Running I/O for 5 seconds... 00:10:29.487 [2024-11-29 16:45:53.115519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.487 [2024-11-29 16:45:53.115571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.487 [2024-11-29 16:45:53.132578] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.487 [2024-11-29 16:45:53.132618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.487 [2024-11-29 16:45:53.149479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.487 [2024-11-29 16:45:53.149532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.487 [2024-11-29 16:45:53.165516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.487 [2024-11-29 16:45:53.165553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.487 [2024-11-29 16:45:53.182277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.487 [2024-11-29 16:45:53.182361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.487 [2024-11-29 16:45:53.199427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.487 [2024-11-29 16:45:53.199464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.487 [2024-11-29 16:45:53.216276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.487 [2024-11-29 16:45:53.216399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.487 [2024-11-29 16:45:53.231921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.487 [2024-11-29 16:45:53.231982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.487 [2024-11-29 16:45:53.247540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.487 [2024-11-29 16:45:53.247614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.487 [2024-11-29 16:45:53.265192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.487 [2024-11-29 16:45:53.265553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.281080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.281144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.290851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 
[2024-11-29 16:45:53.290904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.306631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.306685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.324302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.324637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.339806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.340059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.356037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.356116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.372850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.372900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.389186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.389245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.405728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.405781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.421912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.421962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.439872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.439950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.454706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.454940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.464158] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.464194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.480292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.480373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.497168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.497207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.513970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.514163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.746 [2024-11-29 16:45:53.529445] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.746 [2024-11-29 16:45:53.529482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.545234] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.545285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.563617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.563958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.579098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.579466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.589128] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.589179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.604510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.604565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.623776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.623840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.637907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.638206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.652758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.652980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.669555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.669861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.685443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.685778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.702929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.703298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.718519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.718841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.734134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.734469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.744453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.744769] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.760168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.760448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.776126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.776450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.005 [2024-11-29 16:45:53.793559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.005 [2024-11-29 16:45:53.793859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:53.809882] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:53.809916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:53.827454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:53.827496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:53.843613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:53.843652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:53.860180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:53.860218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:53.876791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:53.876826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:53.894265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:53.894550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:53.910650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:53.910684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:53.927457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:53.927495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:53.944964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:53.945002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:53.960822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:53.960870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:53.971154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:53.971191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:53.987167] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:53.987203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:54.004717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:54.004751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:54.021372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:54.021435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:54.039244] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:54.039284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.265 [2024-11-29 16:45:54.055151] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.265 [2024-11-29 16:45:54.055187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.533 [2024-11-29 16:45:54.071119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.533 [2024-11-29 16:45:54.071154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.533 [2024-11-29 16:45:54.089962] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.533 [2024-11-29 16:45:54.090139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.533 [2024-11-29 16:45:54.105561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.533 [2024-11-29 16:45:54.105596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.533 11054.00 IOPS, 86.36 MiB/s [2024-11-29T16:45:54.325Z] [2024-11-29 16:45:54.121543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.533 [2024-11-29 16:45:54.121582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.533 [2024-11-29 16:45:54.138859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.533 [2024-11-29 16:45:54.138900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.533 [2024-11-29 16:45:54.154879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.534 [2024-11-29 16:45:54.154935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.534 [2024-11-29 16:45:54.172020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.534 [2024-11-29 16:45:54.172203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.534 [2024-11-29 16:45:54.187083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.534 [2024-11-29 16:45:54.187306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.534 [2024-11-29 16:45:54.202597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.534 [2024-11-29 16:45:54.202791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.534 [2024-11-29 16:45:54.218578] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:30.534 [2024-11-29 16:45:54.218755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.534 [2024-11-29 16:45:54.235369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.534 [2024-11-29 16:45:54.235564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.534 [2024-11-29 16:45:54.252157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.534 [2024-11-29 16:45:54.252340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.534 [2024-11-29 16:45:54.269176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.534 [2024-11-29 16:45:54.269362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.534 [2024-11-29 16:45:54.284892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.534 [2024-11-29 16:45:54.285071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.534 [2024-11-29 16:45:54.294043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.534 [2024-11-29 16:45:54.294223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.534 [2024-11-29 16:45:54.309688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.534 [2024-11-29 16:45:54.309884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.326464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.326656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.343074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.343381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.360539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.360726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.376211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.376446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.387260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.387464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.402835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.403001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.418569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.418747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.436153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.436382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.450455] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.450641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.466322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.466384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.484460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.484494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.499264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.499299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.515434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.515475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.531667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.531716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.548045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.548078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.564026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.564061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.581235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.581268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-11-29 16:45:54.597321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-11-29 16:45:54.597387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.614211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.614431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.630068] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.630253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.648869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.649052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.663207] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.663286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.677487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.677522] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.694368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.694397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.710808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.710841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.728000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.728184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.743964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.744146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.761842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.762011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.777861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.778035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.793646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.793813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.809437] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.809620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.826927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.827109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.841851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.842024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.858464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.858654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.085 [2024-11-29 16:45:54.874043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.085 [2024-11-29 16:45:54.874298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:54.890556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:54.890855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:54.907436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:54.907710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:54.924201] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:54.924468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:54.940739] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:54.941021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:54.956233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:54.956533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:54.972001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:54.972260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:54.989975] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:54.990078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:55.004510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:55.004624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:55.021575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:55.021793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:55.036411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:55.036628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:55.052880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:55.053125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:55.068242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:55.068551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:55.084612] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:55.084886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:55.099892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:55.100147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 11646.50 IOPS, 90.99 MiB/s [2024-11-29T16:45:55.136Z] [2024-11-29 16:45:55.115121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:55.115408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.344 [2024-11-29 16:45:55.124608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.344 [2024-11-29 16:45:55.124882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.140543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:31.602 [2024-11-29 16:45:55.140769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.157175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.157456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.173696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.173993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.191607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.191851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.207188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.207478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.225464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.225702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.239998] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.240253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.256305] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.256572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.273134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.273426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.290124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.290400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.306593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.306850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.324405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.324717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.339282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.339614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.356046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.356302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.370909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.371170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.602 [2024-11-29 16:45:55.387288] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.602 [2024-11-29 16:45:55.387494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.861 [2024-11-29 16:45:55.402233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.861 [2024-11-29 16:45:55.402548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.861 [2024-11-29 16:45:55.417839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.418072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.862 [2024-11-29 16:45:55.429161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.429480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.862 [2024-11-29 16:45:55.444211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.444475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.862 [2024-11-29 16:45:55.459022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.459262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.862 [2024-11-29 16:45:55.475489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.475821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.862 [2024-11-29 16:45:55.494348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.494640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.862 [2024-11-29 16:45:55.509695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.509938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.862 [2024-11-29 16:45:55.526160] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.526428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.862 [2024-11-29 16:45:55.542322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.542640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.862 [2024-11-29 16:45:55.559753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.560021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.862 [2024-11-29 16:45:55.576042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.576333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.862 [2024-11-29 16:45:55.592822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.593071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.862 [2024-11-29 16:45:55.609743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.609993] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.862 [2024-11-29 16:45:55.626806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.627058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.862 [2024-11-29 16:45:55.642838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.862 [2024-11-29 16:45:55.643017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.660457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.660637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.676111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.676296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.685942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.686168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.700811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.701052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.716832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.717013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.733419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.733589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.751098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.751302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.766231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.766265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.777511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.777548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.793561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.793595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.810082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.810120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.827461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.827501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.843837] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.843870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.862067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.862101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.877954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.877988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.121 [2024-11-29 16:45:55.895148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.121 [2024-11-29 16:45:55.895374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:55.911598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:55.911668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:55.930465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:55.930502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:55.945273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:55.945307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:55.954660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:55.954710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:55.970156] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:55.970242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:55.987617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:55.987681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:56.003826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:56.003860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:56.022035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:56.022070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:56.036508] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:56.036733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:56.045887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:56.045921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:56.061815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:56.061850] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:56.073120] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:56.073155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:56.089958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:56.089991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:56.106511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:56.106544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 11804.33 IOPS, 92.22 MiB/s [2024-11-29T16:45:56.173Z] [2024-11-29 16:45:56.122303] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:56.122430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:56.138899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:56.138935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.381 [2024-11-29 16:45:56.155292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.381 [2024-11-29 16:45:56.155346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.174164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.174201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.188090] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.188281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.204993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.205027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.220443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.220479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.236046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.236082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.253436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.253469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.268584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.268617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.285124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.285157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 
16:45:56.302414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.302447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.318700] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.318748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.335982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.336017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.351861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.351897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.367886] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.367921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.377077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.377110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.392556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.392589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.408211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.408245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.641 [2024-11-29 16:45:56.425169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.641 [2024-11-29 16:45:56.425204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.440379] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.440444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.456428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.456462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.472636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.472669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.490735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.490767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.504835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.504868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.520864] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.520898] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.536881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.536915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.555336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.555404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.569686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.569893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.586608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.586642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.603058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.603092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.620845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.621029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.635248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.635303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.650572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.650606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.661953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.661985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.901 [2024-11-29 16:45:56.677466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.901 [2024-11-29 16:45:56.677499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.694975] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.695010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.709816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.709849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.724144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.724455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.741568] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.741616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.757488] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.757519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.774981] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.775015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.789945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.789979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.806846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.806882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.821982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.822016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.837512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.837545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.847716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.847750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.862619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.862654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.879146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.879179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.896932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.896967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.911485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.161 [2024-11-29 16:45:56.911522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.161 [2024-11-29 16:45:56.927024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.162 [2024-11-29 16:45:56.927056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.162 [2024-11-29 16:45:56.944985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.162 [2024-11-29 16:45:56.945018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:56.959942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:56.959975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:56.975940] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:56.975973] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:56.993498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:56.993533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:57.009160] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:57.009267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:57.025004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:57.025040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:57.043012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:57.043058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:57.058226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:57.058266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:57.074982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:57.075016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:57.091353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:57.091414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:57.108297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:57.108359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 11912.00 IOPS, 93.06 MiB/s [2024-11-29T16:45:57.214Z] [2024-11-29 16:45:57.124549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:57.124584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:57.141763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:57.141955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:57.158466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:57.158503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:57.174849] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:57.174884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:57.190673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:57.190707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.422 [2024-11-29 16:45:57.208901] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.422 [2024-11-29 16:45:57.209087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.681 [2024-11-29 
16:45:57.223742] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.681 [2024-11-29 16:45:57.223776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.238563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.238740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.254961] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.254996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.271006] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.271042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.288541] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.288574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.304490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.304524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.321732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.321922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.336681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.336880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.346398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.346433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.361395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.361430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.376810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.376845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.387717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.387904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.404034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.404068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.421264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.421300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.437710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.437759] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.453941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.682 [2024-11-29 16:45:57.453975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.682 [2024-11-29 16:45:57.470832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.941 [2024-11-29 16:45:57.471064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.941 [2024-11-29 16:45:57.487694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.941 [2024-11-29 16:45:57.487890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.941 [2024-11-29 16:45:57.505725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.941 [2024-11-29 16:45:57.505910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.941 [2024-11-29 16:45:57.521528] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.941 [2024-11-29 16:45:57.521698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.941 [2024-11-29 16:45:57.539915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.941 [2024-11-29 16:45:57.540116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.941 [2024-11-29 16:45:57.553958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.941 [2024-11-29 16:45:57.554142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.941 [2024-11-29 16:45:57.569510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.941 [2024-11-29 16:45:57.569692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.941 [2024-11-29 16:45:57.578902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.941 [2024-11-29 16:45:57.579087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.941 [2024-11-29 16:45:57.594836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.941 [2024-11-29 16:45:57.595050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.941 [2024-11-29 16:45:57.612131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.941 [2024-11-29 16:45:57.612369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.941 [2024-11-29 16:45:57.628331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.942 [2024-11-29 16:45:57.628593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.942 [2024-11-29 16:45:57.644342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.942 [2024-11-29 16:45:57.644572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.942 [2024-11-29 16:45:57.654554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.942 [2024-11-29 16:45:57.654707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.942 [2024-11-29 16:45:57.670989] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.942 [2024-11-29 16:45:57.671164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.942 [2024-11-29 16:45:57.687754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.942 [2024-11-29 16:45:57.687919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.942 [2024-11-29 16:45:57.703033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.942 [2024-11-29 16:45:57.703199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.942 [2024-11-29 16:45:57.720104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.942 [2024-11-29 16:45:57.720275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.200 [2024-11-29 16:45:57.736691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.200 [2024-11-29 16:45:57.736906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.200 [2024-11-29 16:45:57.752912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.200 [2024-11-29 16:45:57.753067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.200 [2024-11-29 16:45:57.770414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.200 [2024-11-29 16:45:57.770524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.200 [2024-11-29 16:45:57.786361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.200 [2024-11-29 16:45:57.786430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.200 [2024-11-29 16:45:57.803442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.200 [2024-11-29 16:45:57.803479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.200 [2024-11-29 16:45:57.819310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.200 [2024-11-29 16:45:57.819488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.200 [2024-11-29 16:45:57.835899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.200 [2024-11-29 16:45:57.835936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.200 [2024-11-29 16:45:57.852793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.200 [2024-11-29 16:45:57.852856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.200 [2024-11-29 16:45:57.870181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.200 [2024-11-29 16:45:57.870394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.200 [2024-11-29 16:45:57.885720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.200 [2024-11-29 16:45:57.885756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.200 [2024-11-29 16:45:57.904064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.200 [2024-11-29 16:45:57.904109] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.200 [2024-11-29 16:45:57.920001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.200 [2024-11-29 16:45:57.920035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.200 [2024-11-29 16:45:57.936780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.200 [2024-11-29 16:45:57.936995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.200 [2024-11-29 16:45:57.953390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.200 [2024-11-29 16:45:57.953446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.201 [2024-11-29 16:45:57.969293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.201 [2024-11-29 16:45:57.969376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.201 [2024-11-29 16:45:57.988058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.201 [2024-11-29 16:45:57.988284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.003393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.003432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.018972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.019174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.035198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.035257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.047068] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.047103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.063757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.063791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.079012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.079046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.088889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.089078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.104057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.104232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 11887.40 IOPS, 92.87 MiB/s [2024-11-29T16:45:58.253Z] [2024-11-29 16:45:58.119005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.119187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 00:10:34.461 
Latency(us) 00:10:34.461 [2024-11-29T16:45:58.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.461 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:34.461 Nvme1n1 : 5.01 11896.26 92.94 0.00 0.00 10745.12 3902.37 21686.46 00:10:34.461 [2024-11-29T16:45:58.253Z] =================================================================================================================== 00:10:34.461 [2024-11-29T16:45:58.253Z] Total : 11896.26 92.94 0.00 0.00 10745.12 3902.37 21686.46 00:10:34.461 [2024-11-29 16:45:58.128475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.128526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.140453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.140504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.152504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.152598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.164518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.164592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.176520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.176580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.188498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.188562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.200513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.200570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.212508] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.212563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.224514] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.224567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.236520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.236574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.461 [2024-11-29 16:45:58.248606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.461 [2024-11-29 16:45:58.248676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.720 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (79301) - No such process 00:10:34.720 16:45:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 79301 00:10:34.720 16:45:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.720 16:45:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.720 16:45:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.720 16:45:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.720 16:45:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:34.720 16:45:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.720 16:45:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.720 delay0 00:10:34.720 16:45:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.720 16:45:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:34.720 16:45:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.720 16:45:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.720 16:45:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.720 16:45:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:34.720 [2024-11-29 16:45:58.469140] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:41.284 Initializing NVMe Controllers 00:10:41.284 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:41.284 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:41.284 Initialization complete. Launching workers. 
00:10:41.284 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 107 00:10:41.284 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 394, failed to submit 33 00:10:41.284 success 288, unsuccessful 106, failed 0 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.284 rmmod nvme_tcp 00:10:41.284 rmmod nvme_fabrics 00:10:41.284 rmmod nvme_keyring 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 79160 ']' 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 79160 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 79160 ']' 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 79160 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79160 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:41.284 killing process with pid 79160 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79160' 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 79160 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 79160 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:41.284 16:46:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.284 16:46:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.284 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:41.284 00:10:41.284 real 0m23.857s 00:10:41.284 user 0m39.188s 00:10:41.284 sys 0m6.658s 00:10:41.284 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.284 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.284 ************************************ 00:10:41.284 END TEST nvmf_zcopy 00:10:41.284 ************************************ 00:10:41.284 16:46:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:41.284 16:46:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:41.284 16:46:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.284 16:46:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:41.284 ************************************ 00:10:41.284 START TEST nvmf_nmic 00:10:41.284 ************************************ 00:10:41.284 16:46:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:41.544 * Looking for test storage... 00:10:41.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:41.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.544 --rc genhtml_branch_coverage=1 00:10:41.544 --rc genhtml_function_coverage=1 00:10:41.544 --rc genhtml_legend=1 00:10:41.544 --rc geninfo_all_blocks=1 00:10:41.544 --rc geninfo_unexecuted_blocks=1 00:10:41.544 00:10:41.544 ' 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:41.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.544 --rc genhtml_branch_coverage=1 00:10:41.544 --rc genhtml_function_coverage=1 00:10:41.544 --rc genhtml_legend=1 00:10:41.544 --rc geninfo_all_blocks=1 00:10:41.544 --rc geninfo_unexecuted_blocks=1 00:10:41.544 00:10:41.544 ' 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:41.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.544 --rc genhtml_branch_coverage=1 00:10:41.544 --rc genhtml_function_coverage=1 00:10:41.544 --rc genhtml_legend=1 00:10:41.544 --rc geninfo_all_blocks=1 00:10:41.544 --rc geninfo_unexecuted_blocks=1 00:10:41.544 00:10:41.544 ' 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:41.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.544 --rc genhtml_branch_coverage=1 00:10:41.544 --rc genhtml_function_coverage=1 00:10:41.544 --rc genhtml_legend=1 00:10:41.544 --rc geninfo_all_blocks=1 00:10:41.544 --rc geninfo_unexecuted_blocks=1 00:10:41.544 00:10:41.544 ' 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.544 16:46:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.544 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.544 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:41.545 16:46:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:41.545 Cannot 
find device "nvmf_init_br" 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:41.545 Cannot find device "nvmf_init_br2" 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:41.545 Cannot find device "nvmf_tgt_br" 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:41.545 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.804 Cannot find device "nvmf_tgt_br2" 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:41.804 Cannot find device "nvmf_init_br" 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:41.804 Cannot find device "nvmf_init_br2" 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:41.804 Cannot find device "nvmf_tgt_br" 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:41.804 Cannot find device "nvmf_tgt_br2" 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:41.804 Cannot find device "nvmf_br" 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:41.804 Cannot find device "nvmf_init_if" 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:41.804 Cannot find device "nvmf_init_if2" 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:41.804 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:42.064 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:42.064 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:10:42.064 00:10:42.064 --- 10.0.0.3 ping statistics --- 00:10:42.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.064 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:42.064 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:42.064 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:10:42.064 00:10:42.064 --- 10.0.0.4 ping statistics --- 00:10:42.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.064 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:42.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:42.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:42.064 00:10:42.064 --- 10.0.0.1 ping statistics --- 00:10:42.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.064 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:42.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:42.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:10:42.064 00:10:42.064 --- 10.0.0.2 ping statistics --- 00:10:42.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.064 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=79678 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 79678 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 79678 ']' 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.064 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.064 [2024-11-29 16:46:05.720811] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:10:42.064 [2024-11-29 16:46:05.721510] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.065 [2024-11-29 16:46:05.853509] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
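Before the target app is launched, port 4420 is opened on both initiator-side interfaces and bridge-internal forwarding is allowed, and the four pings above confirm reachability in both directions across the bridge. Condensed from the trace (the SPDK_NVMF tagging comment is shortened here; the real rules embed the full rule text so the later iptr cleanup can strip them with iptables-save/restore):

    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
    # The target runs inside the namespace, pinned to cores 0-3, with all tracepoint groups enabled.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF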
00:10:42.325 [2024-11-29 16:46:05.876553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.325 [2024-11-29 16:46:05.897525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.325 [2024-11-29 16:46:05.897594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.325 [2024-11-29 16:46:05.897621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.325 [2024-11-29 16:46:05.897629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.325 [2024-11-29 16:46:05.897636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.325 [2024-11-29 16:46:05.898471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.325 [2024-11-29 16:46:05.898516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.325 [2024-11-29 16:46:05.898628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.325 [2024-11-29 16:46:05.898632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.325 [2024-11-29 16:46:05.928962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:42.325 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.325 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:42.325 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:42.325 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.325 16:46:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.325 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.325 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.325 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.325 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.325 [2024-11-29 16:46:06.030855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.325 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.325 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:42.325 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.325 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.325 Malloc0 00:10:42.325 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.325 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:42.325 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.326 [2024-11-29 16:46:06.086301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.326 test case1: single bdev can't be used in multiple subsystems 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.326 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.326 [2024-11-29 16:46:06.110180] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:42.326 [2024-11-29 16:46:06.110219] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:42.326 [2024-11-29 16:46:06.110232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.326 request: 00:10:42.326 { 00:10:42.326 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:42.326 "namespace": { 00:10:42.326 "bdev_name": "Malloc0", 00:10:42.585 "no_auto_visible": false, 00:10:42.585 "hide_metadata": false 00:10:42.585 }, 00:10:42.585 "method": "nvmf_subsystem_add_ns", 00:10:42.585 "req_id": 1 00:10:42.585 } 00:10:42.585 Got 
JSON-RPC error response 00:10:42.585 response: 00:10:42.585 { 00:10:42.586 "code": -32602, 00:10:42.586 "message": "Invalid parameters" 00:10:42.586 } 00:10:42.586 Adding namespace failed - expected result. 00:10:42.586 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:42.586 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:42.586 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:42.586 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:42.586 test case2: host connect to nvmf target in multiple paths 00:10:42.586 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:42.586 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:42.586 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.586 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.586 [2024-11-29 16:46:06.126275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:42.586 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.586 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:42.586 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:42.845 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:42.845 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:42.845 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.845 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:42.845 16:46:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:44.756 16:46:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:44.756 16:46:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:44.756 16:46:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:44.756 16:46:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:44.756 16:46:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:44.756 16:46:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:44.756 16:46:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:44.756 [global] 
00:10:44.756 thread=1 00:10:44.756 invalidate=1 00:10:44.756 rw=write 00:10:44.756 time_based=1 00:10:44.756 runtime=1 00:10:44.756 ioengine=libaio 00:10:44.756 direct=1 00:10:44.756 bs=4096 00:10:44.756 iodepth=1 00:10:44.756 norandommap=0 00:10:44.756 numjobs=1 00:10:44.756 00:10:44.756 verify_dump=1 00:10:44.756 verify_backlog=512 00:10:44.756 verify_state_save=0 00:10:44.756 do_verify=1 00:10:44.756 verify=crc32c-intel 00:10:44.756 [job0] 00:10:44.756 filename=/dev/nvme0n1 00:10:44.756 Could not set queue depth (nvme0n1) 00:10:45.016 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.016 fio-3.35 00:10:45.016 Starting 1 thread 00:10:45.953 00:10:45.953 job0: (groupid=0, jobs=1): err= 0: pid=79757: Fri Nov 29 16:46:09 2024 00:10:45.953 read: IOPS=2800, BW=10.9MiB/s (11.5MB/s)(10.9MiB/1001msec) 00:10:45.953 slat (nsec): min=11010, max=58592, avg=13823.14, stdev=3934.25 00:10:45.953 clat (usec): min=141, max=2140, avg=191.66, stdev=43.66 00:10:45.953 lat (usec): min=152, max=2153, avg=205.48, stdev=43.86 00:10:45.953 clat percentiles (usec): 00:10:45.953 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 174], 00:10:45.953 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 196], 00:10:45.953 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 229], 00:10:45.953 | 99.00th=[ 249], 99.50th=[ 260], 99.90th=[ 502], 99.95th=[ 635], 00:10:45.953 | 99.99th=[ 2147] 00:10:45.953 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:45.953 slat (nsec): min=16333, max=97366, avg=21660.31, stdev=6672.54 00:10:45.953 clat (usec): min=83, max=646, avg=113.35, stdev=19.39 00:10:45.953 lat (usec): min=101, max=670, avg=135.01, stdev=21.29 00:10:45.953 clat percentiles (usec): 00:10:45.953 | 1.00th=[ 89], 5.00th=[ 93], 10.00th=[ 96], 20.00th=[ 100], 00:10:45.953 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 110], 60.00th=[ 114], 00:10:45.953 | 70.00th=[ 120], 80.00th=[ 127], 90.00th=[ 137], 95.00th=[ 145], 00:10:45.953 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 219], 99.95th=[ 306], 00:10:45.953 | 99.99th=[ 644] 00:10:45.953 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:45.953 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:45.953 lat (usec) : 100=10.59%, 250=88.90%, 500=0.44%, 750=0.05% 00:10:45.953 lat (msec) : 4=0.02% 00:10:45.953 cpu : usr=2.10%, sys=8.30%, ctx=5875, majf=0, minf=5 00:10:45.953 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.953 issued rwts: total=2803,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.953 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.953 00:10:45.953 Run status group 0 (all jobs): 00:10:45.953 READ: bw=10.9MiB/s (11.5MB/s), 10.9MiB/s-10.9MiB/s (11.5MB/s-11.5MB/s), io=10.9MiB (11.5MB), run=1001-1001msec 00:10:45.953 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:45.953 00:10:45.953 Disk stats (read/write): 00:10:45.953 nvme0n1: ios=2610/2692, merge=0/0, ticks=536/343, in_queue=879, util=91.38% 00:10:45.953 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:46.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 
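The job file that fio-wrapper fed to fio for the write/verify pass above can be reconstructed from the [global]/[job0] dump in the trace; a minimal stand-alone equivalent is sketched below (the wrapper's exact file layout may differ, and /tmp/nmic-job0.fio is an arbitrary name chosen for this sketch):

    cat > /tmp/nmic-job0.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio /tmp/nmic-job0.fio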
00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.213 rmmod nvme_tcp 00:10:46.213 rmmod nvme_fabrics 00:10:46.213 rmmod nvme_keyring 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:46.213 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:46.214 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 79678 ']' 00:10:46.214 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 79678 00:10:46.214 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 79678 ']' 00:10:46.214 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 79678 00:10:46.214 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:46.214 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.214 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79678 00:10:46.214 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.214 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.214 killing process with pid 79678 00:10:46.214 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79678' 00:10:46.214 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 79678 00:10:46.214 16:46:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 79678 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
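For reference, the target provisioning and host connects exercised by the two nmic test cases above reduce to roughly the rpc.py / nvme-cli sequence below. NQNs, serial numbers, addresses, and ports are as shown in the trace; rpc.py is /home/vagrant/spdk_repo/spdk/scripts/rpc.py talking to the default /var/tmp/spdk.sock, and <hostnqn>/<hostid> stand in for the uuid-based values the test generated:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # test case1: attaching the same bdev to a second subsystem is expected to fail
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # "bdev Malloc0 already claimed" -> Invalid parameters
    # test case2: two listeners, one host connecting over both paths
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    nvme connect --hostnqn=<hostnqn> --hostid=<hostid> -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    nvme connect --hostnqn=<hostnqn> --hostid=<hostid> -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # tears down both controllers ("disconnected 2 controller(s)")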
00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:46.473 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:46.732 00:10:46.732 real 0m5.269s 00:10:46.732 user 0m15.533s 00:10:46.732 sys 0m2.253s 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.732 ************************************ 00:10:46.732 END TEST nvmf_nmic 00:10:46.732 ************************************ 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:46.732 ************************************ 00:10:46.732 START TEST nvmf_fio_target 00:10:46.732 ************************************ 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:46.732 * Looking for test storage... 00:10:46.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:46.732 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:46.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.992 --rc genhtml_branch_coverage=1 00:10:46.992 --rc genhtml_function_coverage=1 00:10:46.992 --rc genhtml_legend=1 00:10:46.992 --rc geninfo_all_blocks=1 00:10:46.992 --rc geninfo_unexecuted_blocks=1 00:10:46.992 00:10:46.992 ' 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:46.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.992 --rc genhtml_branch_coverage=1 00:10:46.992 --rc genhtml_function_coverage=1 00:10:46.992 --rc genhtml_legend=1 00:10:46.992 --rc geninfo_all_blocks=1 00:10:46.992 --rc geninfo_unexecuted_blocks=1 00:10:46.992 00:10:46.992 ' 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:46.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.992 --rc genhtml_branch_coverage=1 00:10:46.992 --rc genhtml_function_coverage=1 00:10:46.992 --rc genhtml_legend=1 00:10:46.992 --rc geninfo_all_blocks=1 00:10:46.992 --rc geninfo_unexecuted_blocks=1 00:10:46.992 00:10:46.992 ' 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:46.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.992 --rc genhtml_branch_coverage=1 00:10:46.992 --rc genhtml_function_coverage=1 00:10:46.992 --rc genhtml_legend=1 00:10:46.992 --rc geninfo_all_blocks=1 00:10:46.992 --rc geninfo_unexecuted_blocks=1 00:10:46.992 00:10:46.992 ' 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:46.992 
16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:46.992 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.993 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:46.993 16:46:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:46.993 Cannot find device "nvmf_init_br" 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:46.993 Cannot find device "nvmf_init_br2" 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:46.993 Cannot find device "nvmf_tgt_br" 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.993 Cannot find device "nvmf_tgt_br2" 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:46.993 Cannot find device "nvmf_init_br" 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:46.993 Cannot find device "nvmf_init_br2" 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:46.993 Cannot find device "nvmf_tgt_br" 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:46.993 Cannot find device "nvmf_tgt_br2" 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:46.993 Cannot find device "nvmf_br" 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:46.993 Cannot find device "nvmf_init_if" 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:46.993 Cannot find device "nvmf_init_if2" 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:46.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:46.993 
16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:46.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:46.993 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:47.252 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:47.252 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:10:47.252 00:10:47.252 --- 10.0.0.3 ping statistics --- 00:10:47.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.252 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:47.252 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:47.252 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:10:47.252 00:10:47.252 --- 10.0.0.4 ping statistics --- 00:10:47.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.252 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:47.252 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:47.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:47.252 00:10:47.252 --- 10.0.0.1 ping statistics --- 00:10:47.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.253 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:47.253 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:47.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:47.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:10:47.253 00:10:47.253 --- 10.0.0.2 ping statistics --- 00:10:47.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.253 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:47.253 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.253 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:10:47.253 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.253 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.253 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.253 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.253 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.253 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.253 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.253 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:47.253 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.253 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.253 16:46:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.253 16:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=79991 00:10:47.253 16:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 79991 00:10:47.253 16:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:47.253 16:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 79991 ']' 00:10:47.253 16:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.253 16:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.253 16:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.253 16:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.253 16:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.512 [2024-11-29 16:46:11.066167] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
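Stripped of the harness bookkeeping, what nvmfappstart -m 0xF does at this point is roughly the following. This is a sketch only: the binary path and flags are the ones visible in the trace, and the polling loop is a minimal stand-in for the waitforlisten helper, using the standard rpc_get_methods RPC.

# Run the NVMe-oF target inside the namespace created above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Wait until the app answers RPCs on its default UNIX socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

The startup notices that follow show the flags taking effect: tracepoint group mask 0xFFFF, four reactors for the 0xF core mask, and the uring socket implementation selected for this run.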
00:10:47.512 [2024-11-29 16:46:11.066366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.512 [2024-11-29 16:46:11.195218] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:47.512 [2024-11-29 16:46:11.224197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.512 [2024-11-29 16:46:11.249151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.512 [2024-11-29 16:46:11.249216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.512 [2024-11-29 16:46:11.249241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.512 [2024-11-29 16:46:11.249251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.512 [2024-11-29 16:46:11.249260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.512 [2024-11-29 16:46:11.250157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.512 [2024-11-29 16:46:11.250265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.512 [2024-11-29 16:46:11.250395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.512 [2024-11-29 16:46:11.250515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.512 [2024-11-29 16:46:11.284993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.447 16:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.447 16:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:48.447 16:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:48.447 16:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:48.447 16:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.447 16:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.447 16:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:48.706 [2024-11-29 16:46:12.350221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.706 16:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.964 16:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:48.964 16:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.223 16:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:49.223 16:46:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.481 16:46:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:49.481 16:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.739 16:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:49.739 16:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:49.997 16:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.256 16:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:50.256 16:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.514 16:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:50.514 16:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.773 16:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:50.773 16:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:51.031 16:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:51.289 16:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:51.289 16:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:51.546 16:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:51.546 16:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:51.803 16:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:52.061 [2024-11-29 16:46:15.829105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:52.062 16:46:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:52.320 16:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:52.578 16:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:52.837 16:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial 
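Taken together, the provisioning steps above reduce to the rpc.py sequence below (a condensed sketch: sizes, names, addresses and the host NQN/UUID are copied from the trace, and rpc points at the repo's scripts/rpc.py talking to the default /var/tmp/spdk.sock).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with the harness' options, then seven 64 MiB malloc bdevs (512-byte blocks).
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in 0 1 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done   # -> Malloc0..Malloc6

# Malloc2/3 become a RAID-0, Malloc4/5/6 a concat bdev.
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

# One subsystem with four namespaces and a TCP listener on the namespace-side address.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

# The kernel initiator connects from the root namespace; the four namespaces
# show up on the host as /dev/nvme0n1..n4.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b \
    --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b

The waitforserial call that follows just polls lsblk -l -o NAME,SERIAL until four block devices carrying the SPDKISFASTANDAWESOME serial are visible.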
SPDKISFASTANDAWESOME 4 00:10:52.837 16:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:52.837 16:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.837 16:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:52.837 16:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:52.837 16:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:54.741 16:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:54.741 16:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:54.741 16:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.741 16:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:54.741 16:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.741 16:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:54.741 16:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:54.741 [global] 00:10:54.741 thread=1 00:10:54.741 invalidate=1 00:10:54.741 rw=write 00:10:54.741 time_based=1 00:10:54.741 runtime=1 00:10:54.741 ioengine=libaio 00:10:54.741 direct=1 00:10:54.741 bs=4096 00:10:54.741 iodepth=1 00:10:54.741 norandommap=0 00:10:54.741 numjobs=1 00:10:54.741 00:10:54.741 verify_dump=1 00:10:54.741 verify_backlog=512 00:10:54.741 verify_state_save=0 00:10:54.741 do_verify=1 00:10:54.741 verify=crc32c-intel 00:10:54.741 [job0] 00:10:54.741 filename=/dev/nvme0n1 00:10:54.998 [job1] 00:10:54.998 filename=/dev/nvme0n2 00:10:54.998 [job2] 00:10:54.998 filename=/dev/nvme0n3 00:10:54.998 [job3] 00:10:54.998 filename=/dev/nvme0n4 00:10:54.998 Could not set queue depth (nvme0n1) 00:10:54.998 Could not set queue depth (nvme0n2) 00:10:54.998 Could not set queue depth (nvme0n3) 00:10:54.998 Could not set queue depth (nvme0n4) 00:10:54.998 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.998 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.998 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.998 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.998 fio-3.35 00:10:54.998 Starting 4 threads 00:10:56.376 00:10:56.376 job0: (groupid=0, jobs=1): err= 0: pid=80182: Fri Nov 29 16:46:19 2024 00:10:56.376 read: IOPS=2040, BW=8164KiB/s (8360kB/s)(8172KiB/1001msec) 00:10:56.376 slat (nsec): min=11363, max=81294, avg=14752.33, stdev=4913.87 00:10:56.376 clat (usec): min=141, max=1635, avg=277.68, stdev=72.02 00:10:56.376 lat (usec): min=154, max=1648, avg=292.43, stdev=73.95 00:10:56.376 clat percentiles (usec): 00:10:56.376 | 1.00th=[ 153], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 241], 00:10:56.376 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:10:56.376 | 70.00th=[ 277], 
80.00th=[ 310], 90.00th=[ 355], 95.00th=[ 437], 00:10:56.376 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 553], 99.95th=[ 676], 00:10:56.376 | 99.99th=[ 1631] 00:10:56.376 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:56.376 slat (nsec): min=14498, max=72599, avg=20926.10, stdev=4032.14 00:10:56.376 clat (usec): min=92, max=2077, avg=172.22, stdev=58.86 00:10:56.376 lat (usec): min=110, max=2094, avg=193.14, stdev=59.20 00:10:56.376 clat percentiles (usec): 00:10:56.376 | 1.00th=[ 101], 5.00th=[ 111], 10.00th=[ 116], 20.00th=[ 127], 00:10:56.376 | 30.00th=[ 163], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 184], 00:10:56.376 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 219], 00:10:56.376 | 99.00th=[ 255], 99.50th=[ 314], 99.90th=[ 529], 99.95th=[ 791], 00:10:56.376 | 99.99th=[ 2073] 00:10:56.376 bw ( KiB/s): min= 8288, max= 8288, per=25.32%, avg=8288.00, stdev= 0.00, samples=1 00:10:56.376 iops : min= 2072, max= 2072, avg=2072.00, stdev= 0.00, samples=1 00:10:56.376 lat (usec) : 100=0.39%, 250=66.71%, 500=32.24%, 750=0.59%, 1000=0.02% 00:10:56.376 lat (msec) : 2=0.02%, 4=0.02% 00:10:56.376 cpu : usr=1.70%, sys=5.60%, ctx=4091, majf=0, minf=9 00:10:56.376 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.376 issued rwts: total=2043,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.376 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.376 job1: (groupid=0, jobs=1): err= 0: pid=80183: Fri Nov 29 16:46:19 2024 00:10:56.376 read: IOPS=1848, BW=7393KiB/s (7570kB/s)(7400KiB/1001msec) 00:10:56.376 slat (nsec): min=11307, max=38878, avg=14603.51, stdev=2990.46 00:10:56.376 clat (usec): min=147, max=5664, avg=284.95, stdev=231.13 00:10:56.376 lat (usec): min=162, max=5679, avg=299.56, stdev=231.65 00:10:56.376 clat percentiles (usec): 00:10:56.376 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 243], 00:10:56.376 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 269], 00:10:56.376 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 334], 95.00th=[ 347], 00:10:56.376 | 99.00th=[ 457], 99.50th=[ 635], 99.90th=[ 4490], 99.95th=[ 5669], 00:10:56.376 | 99.99th=[ 5669] 00:10:56.376 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:56.376 slat (nsec): min=17081, max=80027, avg=24065.96, stdev=8298.81 00:10:56.376 clat (usec): min=95, max=590, avg=190.15, stdev=57.12 00:10:56.376 lat (usec): min=113, max=609, avg=214.22, stdev=61.30 00:10:56.376 clat percentiles (usec): 00:10:56.376 | 1.00th=[ 111], 5.00th=[ 119], 10.00th=[ 126], 20.00th=[ 141], 00:10:56.376 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 190], 00:10:56.376 | 70.00th=[ 198], 80.00th=[ 212], 90.00th=[ 277], 95.00th=[ 318], 00:10:56.376 | 99.00th=[ 363], 99.50th=[ 375], 99.90th=[ 429], 99.95th=[ 570], 00:10:56.376 | 99.99th=[ 594] 00:10:56.376 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:56.376 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:56.376 lat (usec) : 100=0.05%, 250=61.72%, 500=37.84%, 750=0.21% 00:10:56.376 lat (msec) : 2=0.03%, 4=0.08%, 10=0.08% 00:10:56.376 cpu : usr=1.30%, sys=6.40%, ctx=3899, majf=0, minf=13 00:10:56.376 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.376 issued rwts: total=1850,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.376 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.376 job2: (groupid=0, jobs=1): err= 0: pid=80184: Fri Nov 29 16:46:19 2024 00:10:56.376 read: IOPS=1752, BW=7009KiB/s (7177kB/s)(7016KiB/1001msec) 00:10:56.376 slat (nsec): min=12038, max=62402, avg=17187.63, stdev=5096.88 00:10:56.376 clat (usec): min=150, max=565, avg=286.45, stdev=56.67 00:10:56.376 lat (usec): min=165, max=581, avg=303.64, stdev=58.72 00:10:56.376 clat percentiles (usec): 00:10:56.376 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 258], 00:10:56.376 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:10:56.376 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 330], 95.00th=[ 457], 00:10:56.376 | 99.00th=[ 510], 99.50th=[ 523], 99.90th=[ 545], 99.95th=[ 570], 00:10:56.376 | 99.99th=[ 570] 00:10:56.376 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:56.376 slat (nsec): min=18272, max=67080, avg=25068.39, stdev=6274.40 00:10:56.376 clat (usec): min=104, max=307, avg=199.53, stdev=35.78 00:10:56.376 lat (usec): min=123, max=332, avg=224.60, stdev=37.61 00:10:56.376 clat percentiles (usec): 00:10:56.376 | 1.00th=[ 112], 5.00th=[ 124], 10.00th=[ 137], 20.00th=[ 184], 00:10:56.376 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 210], 00:10:56.376 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 253], 00:10:56.376 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 289], 99.95th=[ 293], 00:10:56.376 | 99.99th=[ 306] 00:10:56.376 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:56.376 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:56.376 lat (usec) : 250=55.60%, 500=43.61%, 750=0.79% 00:10:56.376 cpu : usr=2.10%, sys=6.00%, ctx=3803, majf=0, minf=8 00:10:56.376 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.376 issued rwts: total=1754,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.376 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.376 job3: (groupid=0, jobs=1): err= 0: pid=80185: Fri Nov 29 16:46:19 2024 00:10:56.376 read: IOPS=1645, BW=6581KiB/s (6739kB/s)(6588KiB/1001msec) 00:10:56.376 slat (nsec): min=11843, max=49354, avg=15411.64, stdev=4703.20 00:10:56.376 clat (usec): min=183, max=1869, avg=283.86, stdev=58.93 00:10:56.376 lat (usec): min=196, max=1884, avg=299.28, stdev=60.62 00:10:56.376 clat percentiles (usec): 00:10:56.376 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 260], 00:10:56.376 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:10:56.376 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 371], 00:10:56.376 | 99.00th=[ 506], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 1876], 00:10:56.376 | 99.99th=[ 1876] 00:10:56.376 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:56.376 slat (nsec): min=17530, max=99581, avg=27177.78, stdev=9662.58 00:10:56.376 clat (usec): min=109, max=584, avg=217.02, stdev=56.20 00:10:56.376 lat (usec): min=134, max=614, avg=244.20, stdev=61.14 00:10:56.376 clat percentiles (usec): 00:10:56.376 | 1.00th=[ 120], 5.00th=[ 131], 10.00th=[ 
145], 20.00th=[ 188], 00:10:56.376 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:10:56.376 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 289], 95.00th=[ 347], 00:10:56.376 | 99.00th=[ 392], 99.50th=[ 416], 99.90th=[ 474], 99.95th=[ 570], 00:10:56.376 | 99.99th=[ 586] 00:10:56.376 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:56.376 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:56.376 lat (usec) : 250=49.66%, 500=49.80%, 750=0.51% 00:10:56.376 lat (msec) : 2=0.03% 00:10:56.376 cpu : usr=1.60%, sys=6.50%, ctx=3695, majf=0, minf=9 00:10:56.376 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.376 issued rwts: total=1647,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.377 00:10:56.377 Run status group 0 (all jobs): 00:10:56.377 READ: bw=28.5MiB/s (29.8MB/s), 6581KiB/s-8164KiB/s (6739kB/s-8360kB/s), io=28.5MiB (29.9MB), run=1001-1001msec 00:10:56.377 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:10:56.377 00:10:56.377 Disk stats (read/write): 00:10:56.377 nvme0n1: ios=1609/2048, merge=0/0, ticks=463/369, in_queue=832, util=87.17% 00:10:56.377 nvme0n2: ios=1583/1765, merge=0/0, ticks=459/355, in_queue=814, util=87.84% 00:10:56.377 nvme0n3: ios=1536/1729, merge=0/0, ticks=433/366, in_queue=799, util=89.10% 00:10:56.377 nvme0n4: ios=1536/1610, merge=0/0, ticks=437/360, in_queue=797, util=89.66% 00:10:56.377 16:46:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:56.377 [global] 00:10:56.377 thread=1 00:10:56.377 invalidate=1 00:10:56.377 rw=randwrite 00:10:56.377 time_based=1 00:10:56.377 runtime=1 00:10:56.377 ioengine=libaio 00:10:56.377 direct=1 00:10:56.377 bs=4096 00:10:56.377 iodepth=1 00:10:56.377 norandommap=0 00:10:56.377 numjobs=1 00:10:56.377 00:10:56.377 verify_dump=1 00:10:56.377 verify_backlog=512 00:10:56.377 verify_state_save=0 00:10:56.377 do_verify=1 00:10:56.377 verify=crc32c-intel 00:10:56.377 [job0] 00:10:56.377 filename=/dev/nvme0n1 00:10:56.377 [job1] 00:10:56.377 filename=/dev/nvme0n2 00:10:56.377 [job2] 00:10:56.377 filename=/dev/nvme0n3 00:10:56.377 [job3] 00:10:56.377 filename=/dev/nvme0n4 00:10:56.377 Could not set queue depth (nvme0n1) 00:10:56.377 Could not set queue depth (nvme0n2) 00:10:56.377 Could not set queue depth (nvme0n3) 00:10:56.377 Could not set queue depth (nvme0n4) 00:10:56.377 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.377 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.377 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.377 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.377 fio-3.35 00:10:56.377 Starting 4 threads 00:10:57.753 00:10:57.753 job0: (groupid=0, jobs=1): err= 0: pid=80238: Fri Nov 29 16:46:21 2024 00:10:57.753 read: IOPS=2435, BW=9740KiB/s (9974kB/s)(9740KiB/1000msec) 00:10:57.753 slat (nsec): min=8157, max=41169, 
avg=12650.87, stdev=3435.63 00:10:57.753 clat (usec): min=130, max=431, avg=216.10, stdev=60.71 00:10:57.753 lat (usec): min=142, max=445, avg=228.75, stdev=60.78 00:10:57.753 clat percentiles (usec): 00:10:57.753 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:10:57.753 | 30.00th=[ 167], 40.00th=[ 176], 50.00th=[ 194], 60.00th=[ 233], 00:10:57.753 | 70.00th=[ 247], 80.00th=[ 265], 90.00th=[ 314], 95.00th=[ 338], 00:10:57.754 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 400], 99.95th=[ 416], 00:10:57.754 | 99.99th=[ 433] 00:10:57.754 write: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec); 0 zone resets 00:10:57.754 slat (nsec): min=11131, max=74519, avg=19382.39, stdev=4654.58 00:10:57.754 clat (usec): min=95, max=1864, avg=150.35, stdev=51.34 00:10:57.754 lat (usec): min=116, max=1883, avg=169.74, stdev=51.06 00:10:57.754 clat percentiles (usec): 00:10:57.754 | 1.00th=[ 105], 5.00th=[ 113], 10.00th=[ 118], 20.00th=[ 123], 00:10:57.754 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 137], 60.00th=[ 143], 00:10:57.754 | 70.00th=[ 157], 80.00th=[ 176], 90.00th=[ 212], 95.00th=[ 231], 00:10:57.754 | 99.00th=[ 258], 99.50th=[ 273], 99.90th=[ 318], 99.95th=[ 750], 00:10:57.754 | 99.99th=[ 1860] 00:10:57.754 bw ( KiB/s): min=12288, max=12288, per=27.62%, avg=12288.00, stdev= 0.00, samples=1 00:10:57.754 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:57.754 lat (usec) : 100=0.12%, 250=85.79%, 500=14.05%, 750=0.02% 00:10:57.754 lat (msec) : 2=0.02% 00:10:57.754 cpu : usr=1.30%, sys=7.30%, ctx=4996, majf=0, minf=15 00:10:57.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.754 issued rwts: total=2435,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.754 job1: (groupid=0, jobs=1): err= 0: pid=80239: Fri Nov 29 16:46:21 2024 00:10:57.754 read: IOPS=2162, BW=8651KiB/s (8859kB/s)(8660KiB/1001msec) 00:10:57.754 slat (usec): min=8, max=130, avg=15.51, stdev= 5.87 00:10:57.754 clat (usec): min=38, max=410, avg=225.08, stdev=57.68 00:10:57.754 lat (usec): min=155, max=423, avg=240.59, stdev=56.12 00:10:57.754 clat percentiles (usec): 00:10:57.754 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:10:57.754 | 30.00th=[ 178], 40.00th=[ 190], 50.00th=[ 223], 60.00th=[ 237], 00:10:57.754 | 70.00th=[ 249], 80.00th=[ 269], 90.00th=[ 318], 95.00th=[ 338], 00:10:57.754 | 99.00th=[ 367], 99.50th=[ 375], 99.90th=[ 388], 99.95th=[ 404], 00:10:57.754 | 99.99th=[ 412] 00:10:57.754 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:57.754 slat (usec): min=11, max=231, avg=22.44, stdev=10.50 00:10:57.754 clat (usec): min=4, max=6182, avg=161.16, stdev=165.31 00:10:57.754 lat (usec): min=116, max=6203, avg=183.60, stdev=165.18 00:10:57.754 clat percentiles (usec): 00:10:57.754 | 1.00th=[ 110], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 127], 00:10:57.754 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 149], 00:10:57.754 | 70.00th=[ 163], 80.00th=[ 182], 90.00th=[ 217], 95.00th=[ 231], 00:10:57.754 | 99.00th=[ 273], 99.50th=[ 302], 99.90th=[ 3195], 99.95th=[ 3294], 00:10:57.754 | 99.99th=[ 6194] 00:10:57.754 bw ( KiB/s): min=12288, max=12288, per=27.62%, avg=12288.00, stdev= 0.00, samples=1 00:10:57.754 iops : min= 3072, max= 3072, avg=3072.00, stdev= 
0.00, samples=1 00:10:57.754 lat (usec) : 10=0.02%, 50=0.04%, 100=0.06%, 250=85.48%, 500=14.24% 00:10:57.754 lat (usec) : 750=0.02%, 1000=0.02% 00:10:57.754 lat (msec) : 2=0.02%, 4=0.06%, 10=0.02% 00:10:57.754 cpu : usr=2.40%, sys=7.10%, ctx=4743, majf=0, minf=7 00:10:57.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.754 issued rwts: total=2165,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.754 job2: (groupid=0, jobs=1): err= 0: pid=80240: Fri Nov 29 16:46:21 2024 00:10:57.754 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:57.754 slat (nsec): min=11523, max=45076, avg=14124.80, stdev=3323.48 00:10:57.754 clat (usec): min=150, max=578, avg=183.62, stdev=18.84 00:10:57.754 lat (usec): min=163, max=604, avg=197.74, stdev=19.42 00:10:57.754 clat percentiles (usec): 00:10:57.754 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:10:57.754 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:10:57.754 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 217], 00:10:57.754 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 269], 99.95th=[ 306], 00:10:57.754 | 99.99th=[ 578] 00:10:57.754 write: IOPS=2963, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec); 0 zone resets 00:10:57.754 slat (nsec): min=14460, max=97656, avg=21365.40, stdev=5031.95 00:10:57.754 clat (usec): min=102, max=2570, avg=141.99, stdev=56.56 00:10:57.754 lat (usec): min=120, max=2591, avg=163.35, stdev=56.95 00:10:57.754 clat percentiles (usec): 00:10:57.754 | 1.00th=[ 114], 5.00th=[ 120], 10.00th=[ 124], 20.00th=[ 128], 00:10:57.754 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 143], 00:10:57.754 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 161], 95.00th=[ 169], 00:10:57.754 | 99.00th=[ 194], 99.50th=[ 210], 99.90th=[ 676], 99.95th=[ 1663], 00:10:57.754 | 99.99th=[ 2573] 00:10:57.754 bw ( KiB/s): min=12288, max=12288, per=27.62%, avg=12288.00, stdev= 0.00, samples=1 00:10:57.754 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:57.754 lat (usec) : 250=99.66%, 500=0.27%, 750=0.04% 00:10:57.754 lat (msec) : 2=0.02%, 4=0.02% 00:10:57.754 cpu : usr=2.00%, sys=8.00%, ctx=5526, majf=0, minf=15 00:10:57.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.754 issued rwts: total=2560,2966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.754 job3: (groupid=0, jobs=1): err= 0: pid=80241: Fri Nov 29 16:46:21 2024 00:10:57.754 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:57.754 slat (nsec): min=10623, max=48243, avg=14018.24, stdev=4131.53 00:10:57.754 clat (usec): min=144, max=258, avg=179.31, stdev=16.40 00:10:57.754 lat (usec): min=156, max=270, avg=193.33, stdev=16.79 00:10:57.754 clat percentiles (usec): 00:10:57.754 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:10:57.754 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:10:57.754 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 212], 00:10:57.754 | 99.00th=[ 229], 99.50th=[ 237], 99.90th=[ 
247], 99.95th=[ 249], 00:10:57.754 | 99.99th=[ 260] 00:10:57.754 write: IOPS=3045, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec); 0 zone resets 00:10:57.754 slat (nsec): min=13773, max=88914, avg=21439.70, stdev=6144.06 00:10:57.754 clat (usec): min=106, max=1086, avg=140.81, stdev=28.13 00:10:57.754 lat (usec): min=126, max=1108, avg=162.25, stdev=28.81 00:10:57.754 clat percentiles (usec): 00:10:57.754 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 125], 20.00th=[ 128], 00:10:57.754 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:10:57.754 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 161], 95.00th=[ 169], 00:10:57.754 | 99.00th=[ 188], 99.50th=[ 198], 99.90th=[ 482], 99.95th=[ 709], 00:10:57.754 | 99.99th=[ 1090] 00:10:57.754 bw ( KiB/s): min=12288, max=12288, per=27.62%, avg=12288.00, stdev= 0.00, samples=1 00:10:57.754 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:57.754 lat (usec) : 250=99.84%, 500=0.11%, 750=0.04% 00:10:57.754 lat (msec) : 2=0.02% 00:10:57.754 cpu : usr=2.00%, sys=8.40%, ctx=5620, majf=0, minf=11 00:10:57.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.754 issued rwts: total=2560,3049,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.754 00:10:57.754 Run status group 0 (all jobs): 00:10:57.754 READ: bw=37.9MiB/s (39.8MB/s), 8651KiB/s-9.99MiB/s (8859kB/s-10.5MB/s), io=38.0MiB (39.8MB), run=1000-1001msec 00:10:57.754 WRITE: bw=43.5MiB/s (45.6MB/s), 9.99MiB/s-11.9MiB/s (10.5MB/s-12.5MB/s), io=43.5MiB (45.6MB), run=1000-1001msec 00:10:57.754 00:10:57.754 Disk stats (read/write): 00:10:57.754 nvme0n1: ios=2098/2440, merge=0/0, ticks=421/369, in_queue=790, util=87.58% 00:10:57.754 nvme0n2: ios=2078/2048, merge=0/0, ticks=479/317, in_queue=796, util=87.68% 00:10:57.754 nvme0n3: ios=2191/2560, merge=0/0, ticks=406/386, in_queue=792, util=89.32% 00:10:57.754 nvme0n4: ios=2247/2560, merge=0/0, ticks=409/384, in_queue=793, util=89.78% 00:10:57.754 16:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:57.754 [global] 00:10:57.754 thread=1 00:10:57.754 invalidate=1 00:10:57.754 rw=write 00:10:57.754 time_based=1 00:10:57.754 runtime=1 00:10:57.754 ioengine=libaio 00:10:57.755 direct=1 00:10:57.755 bs=4096 00:10:57.755 iodepth=128 00:10:57.755 norandommap=0 00:10:57.755 numjobs=1 00:10:57.755 00:10:57.755 verify_dump=1 00:10:57.755 verify_backlog=512 00:10:57.755 verify_state_save=0 00:10:57.755 do_verify=1 00:10:57.755 verify=crc32c-intel 00:10:57.755 [job0] 00:10:57.755 filename=/dev/nvme0n1 00:10:57.755 [job1] 00:10:57.755 filename=/dev/nvme0n2 00:10:57.755 [job2] 00:10:57.755 filename=/dev/nvme0n3 00:10:57.755 [job3] 00:10:57.755 filename=/dev/nvme0n4 00:10:57.755 Could not set queue depth (nvme0n1) 00:10:57.755 Could not set queue depth (nvme0n2) 00:10:57.755 Could not set queue depth (nvme0n3) 00:10:57.755 Could not set queue depth (nvme0n4) 00:10:57.755 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.755 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.755 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.755 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.755 fio-3.35 00:10:57.755 Starting 4 threads 00:10:59.133 00:10:59.133 job0: (groupid=0, jobs=1): err= 0: pid=80302: Fri Nov 29 16:46:22 2024 00:10:59.133 read: IOPS=2771, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1005msec) 00:10:59.133 slat (usec): min=7, max=7024, avg=171.34, stdev=872.88 00:10:59.133 clat (usec): min=588, max=24548, avg=21861.33, stdev=2339.74 00:10:59.133 lat (usec): min=5808, max=24581, avg=22032.67, stdev=2169.39 00:10:59.133 clat percentiles (usec): 00:10:59.133 | 1.00th=[ 6390], 5.00th=[17433], 10.00th=[21627], 20.00th=[21890], 00:10:59.133 | 30.00th=[22152], 40.00th=[22152], 50.00th=[22152], 60.00th=[22414], 00:10:59.133 | 70.00th=[22676], 80.00th=[22676], 90.00th=[22938], 95.00th=[23200], 00:10:59.133 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24511], 99.95th=[24511], 00:10:59.133 | 99.99th=[24511] 00:10:59.133 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:10:59.133 slat (usec): min=10, max=5326, avg=164.39, stdev=797.87 00:10:59.133 clat (usec): min=15943, max=23822, avg=21359.95, stdev=1022.07 00:10:59.133 lat (usec): min=17075, max=23838, avg=21524.34, stdev=643.84 00:10:59.133 clat percentiles (usec): 00:10:59.133 | 1.00th=[16712], 5.00th=[20579], 10.00th=[20579], 20.00th=[20841], 00:10:59.133 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21365], 60.00th=[21627], 00:10:59.133 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22152], 95.00th=[22414], 00:10:59.133 | 99.00th=[23725], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:10:59.133 | 99.99th=[23725] 00:10:59.133 bw ( KiB/s): min=12288, max=12288, per=25.50%, avg=12288.00, stdev= 0.00, samples=2 00:10:59.133 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:59.133 lat (usec) : 750=0.02% 00:10:59.133 lat (msec) : 10=0.55%, 20=4.18%, 50=95.25% 00:10:59.133 cpu : usr=4.18%, sys=7.47%, ctx=184, majf=0, minf=8 00:10:59.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:59.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.133 issued rwts: total=2785,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.133 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.133 job1: (groupid=0, jobs=1): err= 0: pid=80303: Fri Nov 29 16:46:22 2024 00:10:59.133 read: IOPS=2874, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1007msec) 00:10:59.133 slat (usec): min=4, max=9854, avg=188.66, stdev=737.99 00:10:59.133 clat (usec): min=2770, max=33704, avg=23749.80, stdev=3561.91 00:10:59.133 lat (usec): min=7910, max=33725, avg=23938.46, stdev=3569.04 00:10:59.134 clat percentiles (usec): 00:10:59.134 | 1.00th=[11076], 5.00th=[18220], 10.00th=[19530], 20.00th=[21365], 00:10:59.134 | 30.00th=[22414], 40.00th=[23200], 50.00th=[23462], 60.00th=[24511], 00:10:59.134 | 70.00th=[25560], 80.00th=[26608], 90.00th=[28181], 95.00th=[29230], 00:10:59.134 | 99.00th=[30802], 99.50th=[32375], 99.90th=[32900], 99.95th=[32900], 00:10:59.134 | 99.99th=[33817] 00:10:59.134 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:10:59.134 slat (usec): min=8, max=7278, avg=141.86, stdev=573.52 00:10:59.134 clat (usec): min=11354, max=31815, avg=18970.70, stdev=3424.75 00:10:59.134 lat (usec): min=11374, max=31834, avg=19112.55, stdev=3445.55 00:10:59.134 clat percentiles 
(usec): 00:10:59.134 | 1.00th=[12518], 5.00th=[14353], 10.00th=[15401], 20.00th=[16057], 00:10:59.134 | 30.00th=[16581], 40.00th=[17433], 50.00th=[18220], 60.00th=[19268], 00:10:59.134 | 70.00th=[20579], 80.00th=[22152], 90.00th=[23725], 95.00th=[25560], 00:10:59.134 | 99.00th=[27919], 99.50th=[28181], 99.90th=[31851], 99.95th=[31851], 00:10:59.134 | 99.99th=[31851] 00:10:59.134 bw ( KiB/s): min=12288, max=12312, per=25.53%, avg=12300.00, stdev=16.97, samples=2 00:10:59.134 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:10:59.134 lat (msec) : 4=0.02%, 10=0.18%, 20=39.72%, 50=60.08% 00:10:59.134 cpu : usr=2.49%, sys=8.55%, ctx=762, majf=0, minf=7 00:10:59.134 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:59.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.134 issued rwts: total=2895,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.134 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.134 job2: (groupid=0, jobs=1): err= 0: pid=80304: Fri Nov 29 16:46:22 2024 00:10:59.134 read: IOPS=2773, BW=10.8MiB/s (11.4MB/s)(10.9MiB/1004msec) 00:10:59.134 slat (usec): min=7, max=5839, avg=171.04, stdev=870.59 00:10:59.134 clat (usec): min=785, max=23741, avg=21905.85, stdev=2407.69 00:10:59.134 lat (usec): min=5306, max=23753, avg=22076.89, stdev=2242.84 00:10:59.134 clat percentiles (usec): 00:10:59.134 | 1.00th=[ 5800], 5.00th=[17433], 10.00th=[21627], 20.00th=[21890], 00:10:59.134 | 30.00th=[22152], 40.00th=[22152], 50.00th=[22414], 60.00th=[22414], 00:10:59.134 | 70.00th=[22676], 80.00th=[22938], 90.00th=[22938], 95.00th=[23462], 00:10:59.134 | 99.00th=[23462], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:10:59.134 | 99.99th=[23725] 00:10:59.134 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:10:59.134 slat (usec): min=10, max=6041, avg=164.81, stdev=808.86 00:10:59.134 clat (usec): min=15627, max=23710, avg=21287.66, stdev=1057.06 00:10:59.134 lat (usec): min=15810, max=23756, avg=21452.47, stdev=694.54 00:10:59.134 clat percentiles (usec): 00:10:59.134 | 1.00th=[16450], 5.00th=[20317], 10.00th=[20579], 20.00th=[20841], 00:10:59.134 | 30.00th=[21103], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:10:59.134 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22152], 95.00th=[22414], 00:10:59.134 | 99.00th=[23725], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:10:59.134 | 99.99th=[23725] 00:10:59.134 bw ( KiB/s): min=12288, max=12288, per=25.50%, avg=12288.00, stdev= 0.00, samples=2 00:10:59.134 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:59.134 lat (usec) : 1000=0.02% 00:10:59.134 lat (msec) : 10=0.55%, 20=4.70%, 50=94.74% 00:10:59.134 cpu : usr=2.99%, sys=6.78%, ctx=184, majf=0, minf=13 00:10:59.134 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:59.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.134 issued rwts: total=2785,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.134 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.134 job3: (groupid=0, jobs=1): err= 0: pid=80305: Fri Nov 29 16:46:22 2024 00:10:59.134 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:59.134 slat (usec): min=4, max=7070, avg=194.07, stdev=734.90 00:10:59.134 clat (usec): 
min=16838, max=36197, avg=24733.24, stdev=3180.39 00:10:59.134 lat (usec): min=16853, max=36207, avg=24927.31, stdev=3181.87 00:10:59.134 clat percentiles (usec): 00:10:59.134 | 1.00th=[17433], 5.00th=[19530], 10.00th=[21365], 20.00th=[22414], 00:10:59.134 | 30.00th=[23200], 40.00th=[23725], 50.00th=[24249], 60.00th=[25035], 00:10:59.134 | 70.00th=[25822], 80.00th=[27395], 90.00th=[28705], 95.00th=[30802], 00:10:59.134 | 99.00th=[32900], 99.50th=[33817], 99.90th=[36439], 99.95th=[36439], 00:10:59.134 | 99.99th=[36439] 00:10:59.134 write: IOPS=2902, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1004msec); 0 zone resets 00:10:59.134 slat (usec): min=8, max=6584, avg=165.61, stdev=613.24 00:10:59.134 clat (usec): min=2716, max=31866, avg=21558.56, stdev=4112.56 00:10:59.134 lat (usec): min=4772, max=32079, avg=21724.16, stdev=4115.07 00:10:59.134 clat percentiles (usec): 00:10:59.134 | 1.00th=[ 8586], 5.00th=[16057], 10.00th=[16909], 20.00th=[18220], 00:10:59.134 | 30.00th=[19530], 40.00th=[20841], 50.00th=[21627], 60.00th=[22676], 00:10:59.134 | 70.00th=[23200], 80.00th=[24249], 90.00th=[26608], 95.00th=[29492], 00:10:59.134 | 99.00th=[31851], 99.50th=[31851], 99.90th=[31851], 99.95th=[31851], 00:10:59.134 | 99.99th=[31851] 00:10:59.134 bw ( KiB/s): min=10000, max=12312, per=23.15%, avg=11156.00, stdev=1634.83, samples=2 00:10:59.134 iops : min= 2500, max= 3078, avg=2789.00, stdev=408.71, samples=2 00:10:59.134 lat (msec) : 4=0.02%, 10=0.55%, 20=19.42%, 50=80.01% 00:10:59.134 cpu : usr=2.39%, sys=7.88%, ctx=819, majf=0, minf=19 00:10:59.134 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:59.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:59.134 issued rwts: total=2560,2914,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.134 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:59.134 00:10:59.134 Run status group 0 (all jobs): 00:10:59.134 READ: bw=42.8MiB/s (44.8MB/s), 9.96MiB/s-11.2MiB/s (10.4MB/s-11.8MB/s), io=43.1MiB (45.2MB), run=1004-1007msec 00:10:59.134 WRITE: bw=47.1MiB/s (49.3MB/s), 11.3MiB/s-12.0MiB/s (11.9MB/s-12.5MB/s), io=47.4MiB (49.7MB), run=1004-1007msec 00:10:59.134 00:10:59.134 Disk stats (read/write): 00:10:59.134 nvme0n1: ios=2514/2560, merge=0/0, ticks=12723/12548, in_queue=25271, util=88.37% 00:10:59.134 nvme0n2: ios=2557/2560, merge=0/0, ticks=19662/14429, in_queue=34091, util=88.88% 00:10:59.134 nvme0n3: ios=2481/2560, merge=0/0, ticks=11496/10533, in_queue=22029, util=89.51% 00:10:59.134 nvme0n4: ios=2141/2560, merge=0/0, ticks=16479/17128, in_queue=33607, util=89.34% 00:10:59.135 16:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:59.135 [global] 00:10:59.135 thread=1 00:10:59.135 invalidate=1 00:10:59.135 rw=randwrite 00:10:59.135 time_based=1 00:10:59.135 runtime=1 00:10:59.135 ioengine=libaio 00:10:59.135 direct=1 00:10:59.135 bs=4096 00:10:59.135 iodepth=128 00:10:59.135 norandommap=0 00:10:59.135 numjobs=1 00:10:59.135 00:10:59.135 verify_dump=1 00:10:59.135 verify_backlog=512 00:10:59.135 verify_state_save=0 00:10:59.135 do_verify=1 00:10:59.135 verify=crc32c-intel 00:10:59.135 [job0] 00:10:59.135 filename=/dev/nvme0n1 00:10:59.135 [job1] 00:10:59.135 filename=/dev/nvme0n2 00:10:59.135 [job2] 00:10:59.135 filename=/dev/nvme0n3 00:10:59.135 [job3] 00:10:59.135 filename=/dev/nvme0n4 00:10:59.135 
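Each fio-wrapper pass above expands to an ini-style job file like the one just echoed; the write/randwrite passes differ only in the rw type (-t), queue depth (-d) and runtime (-r) handed to the wrapper. Run outside the wrapper, this fourth pass would look roughly as follows (a sketch: the job file restates the printed configuration verbatim, and the file name is made up for the example).

cat > nvmf_randwrite_qd128.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1

verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF

fio nvmf_randwrite_qd128.fio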
Could not set queue depth (nvme0n1) 00:10:59.135 Could not set queue depth (nvme0n2) 00:10:59.135 Could not set queue depth (nvme0n3) 00:10:59.135 Could not set queue depth (nvme0n4) 00:10:59.135 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:59.135 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:59.135 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:59.135 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:59.135 fio-3.35 00:10:59.135 Starting 4 threads 00:11:00.512 00:11:00.512 job0: (groupid=0, jobs=1): err= 0: pid=80363: Fri Nov 29 16:46:24 2024 00:11:00.512 read: IOPS=2657, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1011msec) 00:11:00.512 slat (usec): min=6, max=17341, avg=180.46, stdev=1366.14 00:11:00.512 clat (usec): min=254, max=39812, avg=23641.58, stdev=2985.72 00:11:00.512 lat (usec): min=11364, max=45209, avg=23822.04, stdev=3184.95 00:11:00.512 clat percentiles (usec): 00:11:00.512 | 1.00th=[11863], 5.00th=[19268], 10.00th=[21890], 20.00th=[22414], 00:11:00.512 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23987], 00:11:00.512 | 70.00th=[24249], 80.00th=[24773], 90.00th=[27395], 95.00th=[27657], 00:11:00.512 | 99.00th=[30802], 99.50th=[35914], 99.90th=[37487], 99.95th=[37487], 00:11:00.512 | 99.99th=[39584] 00:11:00.512 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:11:00.512 slat (usec): min=15, max=13872, avg=163.10, stdev=1092.64 00:11:00.512 clat (usec): min=7853, max=29537, avg=21007.26, stdev=2983.72 00:11:00.512 lat (usec): min=10207, max=29583, avg=21170.36, stdev=2813.22 00:11:00.512 clat percentiles (usec): 00:11:00.512 | 1.00th=[10945], 5.00th=[16057], 10.00th=[16712], 20.00th=[20317], 00:11:00.512 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21365], 60.00th=[21627], 00:11:00.512 | 70.00th=[22152], 80.00th=[22676], 90.00th=[23725], 95.00th=[25035], 00:11:00.512 | 99.00th=[26608], 99.50th=[26608], 99.90th=[27395], 99.95th=[27395], 00:11:00.512 | 99.99th=[29492] 00:11:00.512 bw ( KiB/s): min=12280, max=12312, per=19.52%, avg=12296.00, stdev=22.63, samples=2 00:11:00.512 iops : min= 3070, max= 3078, avg=3074.00, stdev= 5.66, samples=2 00:11:00.512 lat (usec) : 500=0.02% 00:11:00.512 lat (msec) : 10=0.02%, 20=13.11%, 50=86.86% 00:11:00.512 cpu : usr=1.98%, sys=9.11%, ctx=120, majf=0, minf=15 00:11:00.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:00.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.512 issued rwts: total=2687,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.512 job1: (groupid=0, jobs=1): err= 0: pid=80364: Fri Nov 29 16:46:24 2024 00:11:00.512 read: IOPS=2599, BW=10.2MiB/s (10.6MB/s)(10.2MiB/1009msec) 00:11:00.512 slat (usec): min=7, max=12287, avg=162.06, stdev=1041.92 00:11:00.512 clat (usec): min=1542, max=42824, avg=23066.55, stdev=3758.99 00:11:00.512 lat (usec): min=9729, max=48158, avg=23228.61, stdev=3733.18 00:11:00.512 clat percentiles (usec): 00:11:00.512 | 1.00th=[10290], 5.00th=[15664], 10.00th=[21365], 20.00th=[22414], 00:11:00.512 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:11:00.512 
| 70.00th=[23987], 80.00th=[24511], 90.00th=[25297], 95.00th=[26870], 00:11:00.512 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:11:00.512 | 99.99th=[42730] 00:11:00.512 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:11:00.512 slat (usec): min=5, max=24134, avg=182.00, stdev=1227.15 00:11:00.512 clat (usec): min=10695, max=36874, avg=21925.42, stdev=3046.79 00:11:00.512 lat (usec): min=10991, max=36900, avg=22107.42, stdev=2870.80 00:11:00.512 clat percentiles (usec): 00:11:00.512 | 1.00th=[13042], 5.00th=[19006], 10.00th=[19530], 20.00th=[20317], 00:11:00.512 | 30.00th=[21365], 40.00th=[21365], 50.00th=[21627], 60.00th=[21627], 00:11:00.512 | 70.00th=[22414], 80.00th=[22938], 90.00th=[23725], 95.00th=[26608], 00:11:00.512 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:11:00.512 | 99.99th=[36963] 00:11:00.513 bw ( KiB/s): min=11768, max=12312, per=19.11%, avg=12040.00, stdev=384.67, samples=2 00:11:00.513 iops : min= 2942, max= 3078, avg=3010.00, stdev=96.17, samples=2 00:11:00.513 lat (msec) : 2=0.02%, 10=0.23%, 20=13.26%, 50=86.50% 00:11:00.513 cpu : usr=2.88%, sys=7.84%, ctx=120, majf=0, minf=12 00:11:00.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:00.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.513 issued rwts: total=2623,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.513 job2: (groupid=0, jobs=1): err= 0: pid=80365: Fri Nov 29 16:46:24 2024 00:11:00.513 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:11:00.513 slat (usec): min=6, max=6965, avg=101.87, stdev=643.87 00:11:00.513 clat (usec): min=7528, max=22051, avg=14126.43, stdev=1476.53 00:11:00.513 lat (usec): min=7543, max=26539, avg=14228.29, stdev=1488.44 00:11:00.513 clat percentiles (usec): 00:11:00.513 | 1.00th=[ 8979], 5.00th=[12387], 10.00th=[13304], 20.00th=[13698], 00:11:00.513 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14222], 60.00th=[14353], 00:11:00.513 | 70.00th=[14484], 80.00th=[14746], 90.00th=[14877], 95.00th=[15008], 00:11:00.513 | 99.00th=[20841], 99.50th=[21627], 99.90th=[21890], 99.95th=[22152], 00:11:00.513 | 99.99th=[22152] 00:11:00.513 write: IOPS=4895, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1006msec); 0 zone resets 00:11:00.513 slat (usec): min=10, max=9408, avg=101.42, stdev=616.00 00:11:00.513 clat (usec): min=628, max=17756, avg=12670.68, stdev=1468.83 00:11:00.513 lat (usec): min=5658, max=17780, avg=12772.10, stdev=1370.08 00:11:00.513 clat percentiles (usec): 00:11:00.513 | 1.00th=[ 6915], 5.00th=[10683], 10.00th=[11338], 20.00th=[11994], 00:11:00.513 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:11:00.513 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091], 00:11:00.513 | 99.00th=[17695], 99.50th=[17695], 99.90th=[17695], 99.95th=[17695], 00:11:00.513 | 99.99th=[17695] 00:11:00.513 bw ( KiB/s): min=17920, max=20496, per=30.49%, avg=19208.00, stdev=1821.51, samples=2 00:11:00.513 iops : min= 4480, max= 5124, avg=4802.00, stdev=455.38, samples=2 00:11:00.513 lat (usec) : 750=0.01% 00:11:00.513 lat (msec) : 10=4.42%, 20=94.80%, 50=0.78% 00:11:00.513 cpu : usr=4.28%, sys=12.44%, ctx=203, majf=0, minf=15 00:11:00.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:00.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.513 issued rwts: total=4608,4925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.513 job3: (groupid=0, jobs=1): err= 0: pid=80366: Fri Nov 29 16:46:24 2024 00:11:00.513 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:11:00.513 slat (usec): min=7, max=7092, avg=102.02, stdev=648.79 00:11:00.513 clat (usec): min=8235, max=22276, avg=14186.10, stdev=1568.28 00:11:00.513 lat (usec): min=8250, max=26860, avg=14288.13, stdev=1595.84 00:11:00.513 clat percentiles (usec): 00:11:00.513 | 1.00th=[ 8848], 5.00th=[12780], 10.00th=[13304], 20.00th=[13698], 00:11:00.513 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14222], 60.00th=[14353], 00:11:00.513 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15139], 95.00th=[15926], 00:11:00.513 | 99.00th=[21365], 99.50th=[22152], 99.90th=[22152], 99.95th=[22152], 00:11:00.513 | 99.99th=[22152] 00:11:00.513 write: IOPS=4827, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1006msec); 0 zone resets 00:11:00.513 slat (usec): min=8, max=10031, avg=101.58, stdev=605.73 00:11:00.513 clat (usec): min=5266, max=18662, avg=12790.77, stdev=1441.21 00:11:00.513 lat (usec): min=5286, max=18687, avg=12892.35, stdev=1347.20 00:11:00.513 clat percentiles (usec): 00:11:00.513 | 1.00th=[ 6456], 5.00th=[11207], 10.00th=[11600], 20.00th=[12125], 00:11:00.513 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:11:00.513 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13698], 95.00th=[14091], 00:11:00.513 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:11:00.513 | 99.99th=[18744] 00:11:00.513 bw ( KiB/s): min=17360, max=20472, per=30.02%, avg=18916.00, stdev=2200.52, samples=2 00:11:00.513 iops : min= 4340, max= 5118, avg=4729.00, stdev=550.13, samples=2 00:11:00.513 lat (msec) : 10=3.53%, 20=95.68%, 50=0.79% 00:11:00.513 cpu : usr=3.58%, sys=13.73%, ctx=206, majf=0, minf=9 00:11:00.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:00.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.513 issued rwts: total=4608,4856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.513 00:11:00.513 Run status group 0 (all jobs): 00:11:00.513 READ: bw=56.1MiB/s (58.9MB/s), 10.2MiB/s-17.9MiB/s (10.6MB/s-18.8MB/s), io=56.7MiB (59.5MB), run=1006-1011msec 00:11:00.513 WRITE: bw=61.5MiB/s (64.5MB/s), 11.9MiB/s-19.1MiB/s (12.4MB/s-20.1MB/s), io=62.2MiB (65.2MB), run=1006-1011msec 00:11:00.513 00:11:00.513 Disk stats (read/write): 00:11:00.513 nvme0n1: ios=2298/2560, merge=0/0, ticks=52085/51049, in_queue=103134, util=87.98% 00:11:00.513 nvme0n2: ios=2233/2560, merge=0/0, ticks=49084/53615, in_queue=102699, util=88.78% 00:11:00.513 nvme0n3: ios=3962/4096, merge=0/0, ticks=52646/48547, in_queue=101193, util=89.12% 00:11:00.513 nvme0n4: ios=3916/4096, merge=0/0, ticks=52777/48867, in_queue=101644, util=89.89% 00:11:00.513 16:46:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:00.513 16:46:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=80379 00:11:00.513 16:46:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 
-t read -r 10 00:11:00.513 16:46:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:00.513 [global] 00:11:00.513 thread=1 00:11:00.513 invalidate=1 00:11:00.513 rw=read 00:11:00.513 time_based=1 00:11:00.513 runtime=10 00:11:00.513 ioengine=libaio 00:11:00.513 direct=1 00:11:00.513 bs=4096 00:11:00.513 iodepth=1 00:11:00.513 norandommap=1 00:11:00.513 numjobs=1 00:11:00.513 00:11:00.513 [job0] 00:11:00.513 filename=/dev/nvme0n1 00:11:00.513 [job1] 00:11:00.513 filename=/dev/nvme0n2 00:11:00.513 [job2] 00:11:00.513 filename=/dev/nvme0n3 00:11:00.513 [job3] 00:11:00.513 filename=/dev/nvme0n4 00:11:00.513 Could not set queue depth (nvme0n1) 00:11:00.513 Could not set queue depth (nvme0n2) 00:11:00.513 Could not set queue depth (nvme0n3) 00:11:00.513 Could not set queue depth (nvme0n4) 00:11:00.513 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.513 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.513 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.513 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.513 fio-3.35 00:11:00.513 Starting 4 threads 00:11:03.797 16:46:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:03.797 fio: pid=80422, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:03.797 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=40529920, buflen=4096 00:11:03.797 16:46:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:04.055 fio: pid=80421, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:04.055 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=71180288, buflen=4096 00:11:04.055 16:46:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:04.055 16:46:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:04.314 fio: pid=80419, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:04.314 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=49786880, buflen=4096 00:11:04.314 16:46:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:04.314 16:46:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:04.573 fio: pid=80420, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:04.573 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=16982016, buflen=4096 00:11:04.573 00:11:04.573 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=80419: Fri Nov 29 16:46:28 2024 00:11:04.573 read: IOPS=3454, BW=13.5MiB/s (14.1MB/s)(47.5MiB/3519msec) 00:11:04.573 slat (usec): min=7, max=13661, avg=17.14, stdev=193.61 00:11:04.573 clat (usec): min=3, max=2414, avg=270.88, stdev=63.19 00:11:04.573 lat (usec): min=129, 
max=13875, avg=288.01, stdev=205.52 00:11:04.573 clat percentiles (usec): 00:11:04.573 | 1.00th=[ 149], 5.00th=[ 192], 10.00th=[ 210], 20.00th=[ 249], 00:11:04.573 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:11:04.573 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:11:04.573 | 99.00th=[ 429], 99.50th=[ 474], 99.90th=[ 824], 99.95th=[ 1582], 00:11:04.573 | 99.99th=[ 2278] 00:11:04.573 bw ( KiB/s): min=13032, max=13632, per=21.17%, avg=13386.67, stdev=239.66, samples=6 00:11:04.573 iops : min= 3258, max= 3408, avg=3346.67, stdev=59.92, samples=6 00:11:04.573 lat (usec) : 4=0.02%, 250=20.73%, 500=78.81%, 750=0.30%, 1000=0.05% 00:11:04.573 lat (msec) : 2=0.06%, 4=0.02% 00:11:04.573 cpu : usr=1.08%, sys=4.24%, ctx=12181, majf=0, minf=1 00:11:04.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.573 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.573 issued rwts: total=12156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.573 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=80420: Fri Nov 29 16:46:28 2024 00:11:04.573 read: IOPS=5412, BW=21.1MiB/s (22.2MB/s)(80.2MiB/3793msec) 00:11:04.573 slat (usec): min=7, max=14474, avg=15.18, stdev=148.44 00:11:04.573 clat (usec): min=3, max=13165, avg=168.22, stdev=112.18 00:11:04.573 lat (usec): min=119, max=14671, avg=183.40, stdev=187.02 00:11:04.573 clat percentiles (usec): 00:11:04.573 | 1.00th=[ 125], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 151], 00:11:04.573 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:11:04.573 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 196], 95.00th=[ 217], 00:11:04.573 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 644], 99.95th=[ 1434], 00:11:04.574 | 99.99th=[ 3425] 00:11:04.574 bw ( KiB/s): min=17180, max=22944, per=34.35%, avg=21722.86, stdev=2106.62, samples=7 00:11:04.574 iops : min= 4295, max= 5736, avg=5430.71, stdev=526.65, samples=7 00:11:04.574 lat (usec) : 4=0.01%, 10=0.01%, 50=0.01%, 250=99.05%, 500=0.77% 00:11:04.574 lat (usec) : 750=0.08%, 1000=0.02% 00:11:04.574 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01%, 20=0.01% 00:11:04.574 cpu : usr=1.42%, sys=6.54%, ctx=20565, majf=0, minf=1 00:11:04.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.574 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.574 issued rwts: total=20531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.574 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=80421: Fri Nov 29 16:46:28 2024 00:11:04.574 read: IOPS=5335, BW=20.8MiB/s (21.9MB/s)(67.9MiB/3257msec) 00:11:04.574 slat (usec): min=10, max=8430, avg=13.84, stdev=82.14 00:11:04.574 clat (usec): min=136, max=2095, avg=172.14, stdev=30.85 00:11:04.574 lat (usec): min=148, max=8619, avg=185.98, stdev=87.93 00:11:04.574 clat percentiles (usec): 00:11:04.574 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:11:04.574 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:11:04.574 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:11:04.574 
| 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 297], 99.95th=[ 502], 00:11:04.574 | 99.99th=[ 2089] 00:11:04.574 bw ( KiB/s): min=20392, max=21952, per=33.92%, avg=21450.67, stdev=622.64, samples=6 00:11:04.574 iops : min= 5098, max= 5488, avg=5362.67, stdev=155.66, samples=6 00:11:04.574 lat (usec) : 250=99.86%, 500=0.08%, 750=0.02%, 1000=0.01% 00:11:04.574 lat (msec) : 2=0.02%, 4=0.01% 00:11:04.574 cpu : usr=1.35%, sys=6.54%, ctx=17381, majf=0, minf=2 00:11:04.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.574 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.574 issued rwts: total=17379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.574 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=80422: Fri Nov 29 16:46:28 2024 00:11:04.574 read: IOPS=3340, BW=13.0MiB/s (13.7MB/s)(38.7MiB/2962msec) 00:11:04.574 slat (usec): min=10, max=151, avg=13.31, stdev= 3.50 00:11:04.574 clat (usec): min=149, max=3810, avg=284.45, stdev=53.08 00:11:04.574 lat (usec): min=161, max=3832, avg=297.77, stdev=53.52 00:11:04.574 clat percentiles (usec): 00:11:04.574 | 1.00th=[ 239], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:11:04.574 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:11:04.574 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 322], 00:11:04.574 | 99.00th=[ 429], 99.50th=[ 457], 99.90th=[ 660], 99.95th=[ 889], 00:11:04.574 | 99.99th=[ 3818] 00:11:04.574 bw ( KiB/s): min=12936, max=13664, per=21.21%, avg=13408.00, stdev=292.57, samples=5 00:11:04.574 iops : min= 3234, max= 3416, avg=3352.00, stdev=73.14, samples=5 00:11:04.574 lat (usec) : 250=3.90%, 500=95.84%, 750=0.19%, 1000=0.02% 00:11:04.574 lat (msec) : 2=0.03%, 4=0.01% 00:11:04.574 cpu : usr=1.11%, sys=3.88%, ctx=9896, majf=0, minf=2 00:11:04.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.574 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.574 issued rwts: total=9896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.574 00:11:04.574 Run status group 0 (all jobs): 00:11:04.574 READ: bw=61.7MiB/s (64.7MB/s), 13.0MiB/s-21.1MiB/s (13.7MB/s-22.2MB/s), io=234MiB (246MB), run=2962-3793msec 00:11:04.574 00:11:04.574 Disk stats (read/write): 00:11:04.574 nvme0n1: ios=11495/0, merge=0/0, ticks=3148/0, in_queue=3148, util=95.36% 00:11:04.574 nvme0n2: ios=19523/0, merge=0/0, ticks=3284/0, in_queue=3284, util=95.80% 00:11:04.574 nvme0n3: ios=16659/0, merge=0/0, ticks=2898/0, in_queue=2898, util=96.43% 00:11:04.574 nvme0n4: ios=9613/0, merge=0/0, ticks=2740/0, in_queue=2740, util=96.80% 00:11:04.574 16:46:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:04.574 16:46:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:04.833 16:46:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:04.833 16:46:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:05.092 16:46:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.092 16:46:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:05.351 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.351 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:05.610 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.610 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:05.870 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:05.870 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 80379 00:11:05.870 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:05.870 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.870 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:05.870 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:05.870 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:05.870 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.870 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:05.870 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.129 nvmf hotplug test: fio failed as expected 00:11:06.129 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:06.129 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:06.129 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:06.129 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.129 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:06.388 16:46:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.388 rmmod nvme_tcp 00:11:06.388 rmmod nvme_fabrics 00:11:06.388 rmmod nvme_keyring 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 79991 ']' 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 79991 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 79991 ']' 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 79991 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.388 16:46:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79991 00:11:06.388 killing process with pid 79991 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79991' 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 79991 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 79991 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:06.388 16:46:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:06.388 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:06.647 00:11:06.647 real 0m20.013s 00:11:06.647 user 1m15.198s 00:11:06.647 sys 0m10.108s 00:11:06.647 ************************************ 00:11:06.647 END TEST nvmf_fio_target 00:11:06.647 ************************************ 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.647 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.906 16:46:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:06.906 16:46:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.906 16:46:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.906 16:46:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:06.906 ************************************ 00:11:06.906 START TEST nvmf_bdevio 00:11:06.906 ************************************ 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:06.907 * Looking for test storage... 
00:11:06.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:06.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.907 --rc genhtml_branch_coverage=1 00:11:06.907 --rc genhtml_function_coverage=1 00:11:06.907 --rc genhtml_legend=1 00:11:06.907 --rc geninfo_all_blocks=1 00:11:06.907 --rc geninfo_unexecuted_blocks=1 00:11:06.907 00:11:06.907 ' 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:06.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.907 --rc genhtml_branch_coverage=1 00:11:06.907 --rc genhtml_function_coverage=1 00:11:06.907 --rc genhtml_legend=1 00:11:06.907 --rc geninfo_all_blocks=1 00:11:06.907 --rc geninfo_unexecuted_blocks=1 00:11:06.907 00:11:06.907 ' 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:06.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.907 --rc genhtml_branch_coverage=1 00:11:06.907 --rc genhtml_function_coverage=1 00:11:06.907 --rc genhtml_legend=1 00:11:06.907 --rc geninfo_all_blocks=1 00:11:06.907 --rc geninfo_unexecuted_blocks=1 00:11:06.907 00:11:06.907 ' 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:06.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.907 --rc genhtml_branch_coverage=1 00:11:06.907 --rc genhtml_function_coverage=1 00:11:06.907 --rc genhtml_legend=1 00:11:06.907 --rc geninfo_all_blocks=1 00:11:06.907 --rc geninfo_unexecuted_blocks=1 00:11:06.907 00:11:06.907 ' 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.907 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.908 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
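The nvmftestinit traced just above expands below into nvmf_veth_init, which builds the purely virtual test network used for the rest of this run: two veth pairs for the initiator side stay on the host, two veth pairs have their far ends moved into the nvmf_tgt_ns_spdk namespace for the target side, and the host-side ends are joined by a bridge, with 10.0.0.1/10.0.0.2 as initiator addresses and 10.0.0.3/10.0.0.4 as target addresses. A condensed sketch of the equivalent setup, using only names and addresses that appear in the trace below (not the script itself):

  # Sketch of the veth/bridge topology nvmf_veth_init creates, condensed from the
  # ip/iptables commands traced below in this log.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator interface 1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator interface 2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target interface 1
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target interface 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br                          # host-side veth ends join the bridge
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                             # initiator -> target-namespace sanity check

The ping checks traced below (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) confirm the bridge forwards in both directions before the target is started.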
00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:06.908 Cannot find device "nvmf_init_br" 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:06.908 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:06.908 Cannot find device "nvmf_init_br2" 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:07.167 Cannot find device "nvmf_tgt_br" 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:07.167 Cannot find device "nvmf_tgt_br2" 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:07.167 Cannot find device "nvmf_init_br" 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:07.167 Cannot find device "nvmf_init_br2" 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:07.167 Cannot find device "nvmf_tgt_br" 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:07.167 Cannot find device "nvmf_tgt_br2" 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:07.167 Cannot find device "nvmf_br" 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:07.167 Cannot find device "nvmf_init_if" 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:07.167 Cannot find device "nvmf_init_if2" 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:07.167 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:07.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:07.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:07.168 
16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:07.168 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:07.427 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:07.427 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:07.427 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:07.427 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:07.427 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:07.427 16:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:07.427 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:07.427 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:11:07.427 00:11:07.427 --- 10.0.0.3 ping statistics --- 00:11:07.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.427 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:07.427 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:07.427 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:11:07.427 00:11:07.427 --- 10.0.0.4 ping statistics --- 00:11:07.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.427 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:07.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:07.427 00:11:07.427 --- 10.0.0.1 ping statistics --- 00:11:07.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.427 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:07.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:07.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:11:07.427 00:11:07.427 --- 10.0.0.2 ping statistics --- 00:11:07.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.427 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=80746 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 80746 00:11:07.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 80746 ']' 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.427 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:07.427 [2024-11-29 16:46:31.144042] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:11:07.427 [2024-11-29 16:46:31.144348] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.686 [2024-11-29 16:46:31.271379] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:11:07.686 [2024-11-29 16:46:31.298382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.686 [2024-11-29 16:46:31.320181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.686 [2024-11-29 16:46:31.320640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.686 [2024-11-29 16:46:31.321038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.686 [2024-11-29 16:46:31.321540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.686 [2024-11-29 16:46:31.321752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:07.686 [2024-11-29 16:46:31.322852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:07.686 [2024-11-29 16:46:31.322991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:07.686 [2024-11-29 16:46:31.323032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:07.686 [2024-11-29 16:46:31.323034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.686 [2024-11-29 16:46:31.355259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:07.686 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.686 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:07.686 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:07.686 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.686 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:07.686 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.686 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.686 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.687 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:07.687 [2024-11-29 16:46:31.442537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.687 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.687 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:07.687 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.687 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:07.945 Malloc0 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
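At this point the nvmf_tgt application is running inside the namespace (pid 80746) and the TCP transport has been initialized. Condensing the rpc_cmd calls traced here and continued just below, the bdevio target is assembled with a short RPC sequence; a sketch using the arguments exactly as they appear in the trace:

  # Sketch of the target bring-up traced around this point (arguments as logged).
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                         # TCP transport with the traced options
  $RPC bdev_malloc_create 64 512 -b Malloc0                            # 64 MiB malloc bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0        # Malloc0 becomes the subsystem namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420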
00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:07.945 [2024-11-29 16:46:31.501640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:07.945 { 00:11:07.945 "params": { 00:11:07.945 "name": "Nvme$subsystem", 00:11:07.945 "trtype": "$TEST_TRANSPORT", 00:11:07.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:07.945 "adrfam": "ipv4", 00:11:07.945 "trsvcid": "$NVMF_PORT", 00:11:07.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:07.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:07.945 "hdgst": ${hdgst:-false}, 00:11:07.945 "ddgst": ${ddgst:-false} 00:11:07.945 }, 00:11:07.945 "method": "bdev_nvme_attach_controller" 00:11:07.945 } 00:11:07.945 EOF 00:11:07.945 )") 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:07.945 16:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:07.945 "params": { 00:11:07.945 "name": "Nvme1", 00:11:07.945 "trtype": "tcp", 00:11:07.945 "traddr": "10.0.0.3", 00:11:07.945 "adrfam": "ipv4", 00:11:07.945 "trsvcid": "4420", 00:11:07.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:07.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:07.945 "hdgst": false, 00:11:07.945 "ddgst": false 00:11:07.945 }, 00:11:07.945 "method": "bdev_nvme_attach_controller" 00:11:07.945 }' 00:11:07.945 [2024-11-29 16:46:31.571799] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
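The bdevio binary does not go through nvme-cli; it attaches to the target directly from the JSON handed to --json, which the trace shows arriving as /dev/fd/62, i.e. process substitution of gen_nvmf_target_json. Reformatted from the printf output above, the generated attach entry is essentially the following (values exactly as logged; the surrounding subsystem wrapper is elided here):

  # The bdev_nvme_attach_controller entry printed by gen_nvmf_target_json, reformatted.
  cat <<'EOF'
  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.3",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
  EOF

With header and data digests disabled (hdgst/ddgst false) this is a plain NVMe/TCP connection to 10.0.0.3:4420, and the resulting bdev appears below as Nvme1n1, the device the CUnit suite exercises.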
00:11:07.946 [2024-11-29 16:46:31.571911] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80774 ] 00:11:07.946 [2024-11-29 16:46:31.704601] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:07.946 [2024-11-29 16:46:31.735860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:08.203 [2024-11-29 16:46:31.762305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.203 [2024-11-29 16:46:31.762410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.203 [2024-11-29 16:46:31.762418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.203 [2024-11-29 16:46:31.805737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.203 I/O targets: 00:11:08.203 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:08.203 00:11:08.203 00:11:08.203 CUnit - A unit testing framework for C - Version 2.1-3 00:11:08.203 http://cunit.sourceforge.net/ 00:11:08.203 00:11:08.203 00:11:08.203 Suite: bdevio tests on: Nvme1n1 00:11:08.203 Test: blockdev write read block ...passed 00:11:08.203 Test: blockdev write zeroes read block ...passed 00:11:08.203 Test: blockdev write zeroes read no split ...passed 00:11:08.203 Test: blockdev write zeroes read split ...passed 00:11:08.203 Test: blockdev write zeroes read split partial ...passed 00:11:08.203 Test: blockdev reset ...[2024-11-29 16:46:31.942303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:08.204 [2024-11-29 16:46:31.942441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa7e80 (9): Bad file descriptor 00:11:08.204 [2024-11-29 16:46:31.958058] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:08.204 passed 00:11:08.204 Test: blockdev write read 8 blocks ...passed 00:11:08.204 Test: blockdev write read size > 128k ...passed 00:11:08.204 Test: blockdev write read invalid size ...passed 00:11:08.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:08.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:08.204 Test: blockdev write read max offset ...passed 00:11:08.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:08.204 Test: blockdev writev readv 8 blocks ...passed 00:11:08.204 Test: blockdev writev readv 30 x 1block ...passed 00:11:08.204 Test: blockdev writev readv block ...passed 00:11:08.204 Test: blockdev writev readv size > 128k ...passed 00:11:08.204 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:08.204 Test: blockdev comparev and writev ...[2024-11-29 16:46:31.967120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:08.204 [2024-11-29 16:46:31.967349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:08.204 [2024-11-29 16:46:31.967386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:08.204 [2024-11-29 16:46:31.967399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:08.204 [2024-11-29 16:46:31.967746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:08.204 [2024-11-29 16:46:31.967769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:08.204 [2024-11-29 16:46:31.967790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:08.204 [2024-11-29 16:46:31.967802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:08.204 [2024-11-29 16:46:31.968095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:08.204 [2024-11-29 16:46:31.968116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:08.204 [2024-11-29 16:46:31.968136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:08.204 [2024-11-29 16:46:31.968148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:08.204 [2024-11-29 16:46:31.968463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:08.204 [2024-11-29 16:46:31.968486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:08.204 [2024-11-29 16:46:31.968514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:08.204 [2024-11-29 16:46:31.968534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:08.204 passed 00:11:08.204 Test: blockdev nvme passthru rw ...passed 00:11:08.204 Test: blockdev nvme passthru vendor specific ...[2024-11-29 16:46:31.969777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOpassed 00:11:08.204 Test: blockdev nvme admin passthru ...CK OFFSET 0x0 len:0x0 00:11:08.204 [2024-11-29 16:46:31.969948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:08.204 [2024-11-29 16:46:31.970107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:08.204 [2024-11-29 16:46:31.970137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:08.204 [2024-11-29 16:46:31.970258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:08.204 [2024-11-29 16:46:31.970283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:08.204 [2024-11-29 16:46:31.970443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:08.204 [2024-11-29 16:46:31.970471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:08.204 passed 00:11:08.204 Test: blockdev copy ...passed 00:11:08.204 00:11:08.204 Run Summary: Type Total Ran Passed Failed Inactive 00:11:08.204 suites 1 1 n/a 0 0 00:11:08.204 tests 23 23 23 0 0 00:11:08.204 asserts 152 152 152 0 n/a 00:11:08.204 00:11:08.204 Elapsed time = 0.149 seconds 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.463 rmmod nvme_tcp 00:11:08.463 rmmod nvme_fabrics 00:11:08.463 rmmod nvme_keyring 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 80746 ']' 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 80746 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 80746 ']' 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 80746 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.463 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80746 00:11:08.722 killing process with pid 80746 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80746' 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 80746 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 80746 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:08.722 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:08.981 00:11:08.981 real 0m2.204s 00:11:08.981 user 0m5.443s 00:11:08.981 sys 0m0.769s 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:08.981 ************************************ 00:11:08.981 END TEST nvmf_bdevio 00:11:08.981 ************************************ 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:08.981 00:11:08.981 real 2m31.151s 00:11:08.981 user 6m32.434s 00:11:08.981 sys 0m52.439s 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.981 ************************************ 00:11:08.981 END TEST nvmf_target_core 00:11:08.981 ************************************ 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:08.981 16:46:32 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:08.981 16:46:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:08.981 16:46:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.981 16:46:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:08.981 ************************************ 00:11:08.981 START TEST nvmf_target_extra 00:11:08.981 ************************************ 00:11:08.981 16:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:09.240 * Looking for test storage... 
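The teardown just above is the standard nvmftestfini path: unload the initiator-side NVMe modules, kill the target process (killprocess first checks the pid with kill -0 and looks at its comm, reactor_3 here, to confirm it is not a sudo wrapper), strip only the SPDK-tagged firewall rules, then dismantle the veth/bridge fixture and the target network namespace. Condensed from the trace (a sketch of the sequence, not the full common.sh logic):

    # Condensed teardown mirroring the nvmftestfini/nvmf_veth_fini trace above.
    sync
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring       # initiator-side modules
    kill "$nvmfpid" && wait "$nvmfpid"                      # target app (pid 80746 in this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the tagged rules
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster && ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if && ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                        # roughly what _remove_spdk_ns does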
00:11:09.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:09.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.240 --rc genhtml_branch_coverage=1 00:11:09.240 --rc genhtml_function_coverage=1 00:11:09.240 --rc genhtml_legend=1 00:11:09.240 --rc geninfo_all_blocks=1 00:11:09.240 --rc geninfo_unexecuted_blocks=1 00:11:09.240 00:11:09.240 ' 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:09.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.240 --rc genhtml_branch_coverage=1 00:11:09.240 --rc genhtml_function_coverage=1 00:11:09.240 --rc genhtml_legend=1 00:11:09.240 --rc geninfo_all_blocks=1 00:11:09.240 --rc geninfo_unexecuted_blocks=1 00:11:09.240 00:11:09.240 ' 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:09.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.240 --rc genhtml_branch_coverage=1 00:11:09.240 --rc genhtml_function_coverage=1 00:11:09.240 --rc genhtml_legend=1 00:11:09.240 --rc geninfo_all_blocks=1 00:11:09.240 --rc geninfo_unexecuted_blocks=1 00:11:09.240 00:11:09.240 ' 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:09.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.240 --rc genhtml_branch_coverage=1 00:11:09.240 --rc genhtml_function_coverage=1 00:11:09.240 --rc genhtml_legend=1 00:11:09.240 --rc geninfo_all_blocks=1 00:11:09.240 --rc geninfo_unexecuted_blocks=1 00:11:09.240 00:11:09.240 ' 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.240 16:46:32 
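The scripts/common.sh trace above is the coverage-tooling version check: `lt 1.15 2` splits both version strings on '.', '-' and ':', compares them field by field, and returns success because 1 < 2, so lcov_rc_opt and LCOV_OPTS pick up --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1. A stripped-down comparison in the same spirit (not the exact library code):

    # Minimal per-field version compare in the spirit of scripts/common.sh cmp_versions.
    version_lt() {                        # returns 0 (true) when $1 < $2
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1                          # equal, so not less-than
    }
    version_lt 1.15 2 && echo 'enable --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'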
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.240 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.241 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:09.241 ************************************ 00:11:09.241 START TEST nvmf_auth_target 00:11:09.241 ************************************ 00:11:09.241 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:09.501 * Looking for test storage... 
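The "line 33: [: : integer expression expected" message comes from the `'[' '' -eq 1 ']'` test visible right before it: the variable being compared is empty, and `[ ... -eq ... ]` needs integers on both sides, so test prints the error and returns non-zero, which is why the script simply falls through to the next branch. A guarded form avoids the noise; the real variable name is not visible in the trace, so the one below is illustrative only:

    # Illustrative only: SOME_FLAG stands in for whatever common.sh line 33 actually tests.
    SOME_FLAG=""
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # empty expands to 0, no "integer expression expected"
        echo "flag set"
    fi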
00:11:09.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:09.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.501 --rc genhtml_branch_coverage=1 00:11:09.501 --rc genhtml_function_coverage=1 00:11:09.501 --rc genhtml_legend=1 00:11:09.501 --rc geninfo_all_blocks=1 00:11:09.501 --rc geninfo_unexecuted_blocks=1 00:11:09.501 00:11:09.501 ' 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:09.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.501 --rc genhtml_branch_coverage=1 00:11:09.501 --rc genhtml_function_coverage=1 00:11:09.501 --rc genhtml_legend=1 00:11:09.501 --rc geninfo_all_blocks=1 00:11:09.501 --rc geninfo_unexecuted_blocks=1 00:11:09.501 00:11:09.501 ' 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:09.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.501 --rc genhtml_branch_coverage=1 00:11:09.501 --rc genhtml_function_coverage=1 00:11:09.501 --rc genhtml_legend=1 00:11:09.501 --rc geninfo_all_blocks=1 00:11:09.501 --rc geninfo_unexecuted_blocks=1 00:11:09.501 00:11:09.501 ' 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:09.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.501 --rc genhtml_branch_coverage=1 00:11:09.501 --rc genhtml_function_coverage=1 00:11:09.501 --rc genhtml_legend=1 00:11:09.501 --rc geninfo_all_blocks=1 00:11:09.501 --rc geninfo_unexecuted_blocks=1 00:11:09.501 00:11:09.501 ' 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:09.501 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.502 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:09.502 
16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:09.502 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:09.503 Cannot find device "nvmf_init_br" 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:09.503 Cannot find device "nvmf_init_br2" 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:09.503 Cannot find device "nvmf_tgt_br" 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:09.503 Cannot find device "nvmf_tgt_br2" 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:09.503 Cannot find device "nvmf_init_br" 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:09.503 Cannot find device "nvmf_init_br2" 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:09.503 Cannot find device "nvmf_tgt_br" 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:09.503 Cannot find device "nvmf_tgt_br2" 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:09.503 Cannot find device "nvmf_br" 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:09.503 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:09.761 Cannot find device "nvmf_init_if" 00:11:09.761 16:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:09.761 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:09.761 Cannot find device "nvmf_init_if2" 00:11:09.761 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:09.761 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:09.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.761 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:09.761 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:09.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.761 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:09.761 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:09.762 16:46:33 
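nvmf_veth_init first tries to remove any leftover interfaces, which is what produces the harmless "Cannot find device" lines above, and then builds the fixture: one network namespace for the target, two initiator-side veth pairs addressed 10.0.0.1/24 and 10.0.0.2/24, and two target-side pairs whose ends are moved into the namespace and addressed 10.0.0.3/24 and 10.0.0.4/24. The trace condenses to roughly:

    # Target namespace plus two initiator-side and two target-side veth pairs, as traced above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # then every interface (and lo inside the namespace) is brought up, as in the trace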
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:09.762 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:10.020 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:10.020 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:10.020 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:10.020 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:10.020 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:10.020 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:10.020 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:10.020 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:10.020 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:11:10.020 00:11:10.020 --- 10.0.0.3 ping statistics --- 00:11:10.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.020 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:10.020 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:10.020 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:10.020 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:11:10.020 00:11:10.020 --- 10.0.0.4 ping statistics --- 00:11:10.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.020 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:10.020 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:10.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
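A bridge, nvmf_br, then joins the four *_br peer ends, and the ipts wrapper installs ACCEPT rules for the NVMe/TCP listen port 4420 with an 'SPDK_NVMF:' comment; that tag is exactly what the teardown earlier in this log relies on when it runs iptables-save | grep -v SPDK_NVMF | iptables-restore. The pings that follow only prove basic connectivity across the bridge before the target application is started. Roughly:

    # Bridge the peer ends together and open TCP/4420 with rules tagged for later cleanup.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
    ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1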
00:11:10.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:10.020 00:11:10.020 --- 10.0.0.1 ping statistics --- 00:11:10.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.020 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:10.020 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:10.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:11:10.020 00:11:10.020 --- 10.0.0.2 ping statistics --- 00:11:10.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.020 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:10.020 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.020 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:11:10.020 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.020 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=81056 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 81056 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81056 ']' 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
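With connectivity confirmed, NVMF_APP is re-prefixed with NVMF_TARGET_NS_CMD so the target runs inside the namespace, the initiator side loads nvme-tcp, and nvmfappstart launches nvmf_tgt (pid 81056 here) with the nvmf_auth log flag before waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. In outline:

    # Launch the nvmf target inside the test namespace and wait for its RPC socket.
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # test-harness helper; polls /var/tmp/spdk.sock for pid 81056 above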
00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.021 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=81082 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=58effe4be2d4a61bbc92e4758435d6207258433c3319b3cb 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1rX 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 58effe4be2d4a61bbc92e4758435d6207258433c3319b3cb 0 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 58effe4be2d4a61bbc92e4758435d6207258433c3319b3cb 0 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=58effe4be2d4a61bbc92e4758435d6207258433c3319b3cb 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:11:10.280 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:10.280 16:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1rX 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1rX 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.1rX 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ce533961581525cc7b1e0e7ace450f0ba6bd9594eafb482bebb8325e9928cdcf 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SiQ 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ce533961581525cc7b1e0e7ace450f0ba6bd9594eafb482bebb8325e9928cdcf 3 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ce533961581525cc7b1e0e7ace450f0ba6bd9594eafb482bebb8325e9928cdcf 3 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ce533961581525cc7b1e0e7ace450f0ba6bd9594eafb482bebb8325e9928cdcf 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:10.280 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SiQ 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SiQ 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.SiQ 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:10.539 16:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d904e1bb643394b12a80e33081f82e8a 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.b24 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d904e1bb643394b12a80e33081f82e8a 1 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d904e1bb643394b12a80e33081f82e8a 1 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d904e1bb643394b12a80e33081f82e8a 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.b24 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.b24 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.b24 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:10.539 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=406d78acdc024c50c7cdb47a8fe4e5123f7a462dd25932aa 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Twt 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 406d78acdc024c50c7cdb47a8fe4e5123f7a462dd25932aa 2 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 406d78acdc024c50c7cdb47a8fe4e5123f7a462dd25932aa 2 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=406d78acdc024c50c7cdb47a8fe4e5123f7a462dd25932aa 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Twt 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Twt 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Twt 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b105acebad8b427e6b9f180a308b51b7705f549d143454be 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ZMI 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b105acebad8b427e6b9f180a308b51b7705f549d143454be 2 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b105acebad8b427e6b9f180a308b51b7705f549d143454be 2 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b105acebad8b427e6b9f180a308b51b7705f549d143454be 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ZMI 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ZMI 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.ZMI 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:10.540 16:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3aba33927cbfaf766bf77bfcf63a52d9 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Pt6 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3aba33927cbfaf766bf77bfcf63a52d9 1 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3aba33927cbfaf766bf77bfcf63a52d9 1 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3aba33927cbfaf766bf77bfcf63a52d9 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:10.540 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Pt6 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Pt6 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Pt6 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=77e4f5457710162af195c9bc14e5989a82d1d0adf7f2c487df090cc9eabf9285 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.UWq 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
77e4f5457710162af195c9bc14e5989a82d1d0adf7f2c487df090cc9eabf9285 3 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 77e4f5457710162af195c9bc14e5989a82d1d0adf7f2c487df090cc9eabf9285 3 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=77e4f5457710162af195c9bc14e5989a82d1d0adf7f2c487df090cc9eabf9285 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.UWq 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.UWq 00:11:10.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.UWq 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 81056 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81056 ']' 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.799 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:11.057 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.057 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:11.057 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 81082 /var/tmp/host.sock 00:11:11.057 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81082 ']' 00:11:11.057 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:11.057 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.057 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
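The trace up to this point is the key-generation phase of target/auth.sh: gen_dhchap_key draws len/2 random bytes as an ASCII hex string with xxd, and format_dhchap_key/format_key wraps that string as an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<hash id>:<base64(key + crc32)>:, stored 0600 in a mktemp file (hash ids 0..3 map to null/sha256/sha384/sha512 per the digests array above). The following is only a rough, self-contained sketch of that transformation inferred from the trace; the function name and argument order here are assumptions, not the authoritative nvmf/common.sh source.

#!/usr/bin/env bash
# Hypothetical re-creation of the gen_dhchap_key steps traced above.
# digest id: 0 = null, 1 = sha256, 2 = sha384, 3 = sha512 (as in the log).
gen_dhchap_key_sketch() {
    local digest_id=$1 len=$2 hex file

    # len hex characters == len/2 random bytes, mirroring the xxd calls above
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key-sketch.XXXXXX)

    # DHHC-1:<hash id>:<base64(ascii hex key + little-endian crc32)>: framing,
    # consistent with the DHHC-1:00/01/02/03 secrets used later in this log
    python3 - "$hex" "$digest_id" > "$file" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
PYEOF

    chmod 0600 "$file"
    echo "$file"
}

# e.g. a sha512-tagged secret built from 64 hex characters, like keys[3] above
gen_dhchap_key_sketch 3 64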
00:11:11.057 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.057 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.624 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.624 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:11.624 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:11.624 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.624 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.624 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.624 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:11.624 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1rX 00:11:11.624 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.624 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.624 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.624 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1rX 00:11:11.624 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1rX 00:11:11.881 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.SiQ ]] 00:11:11.881 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SiQ 00:11:11.881 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.881 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.881 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.881 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SiQ 00:11:11.881 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SiQ 00:11:12.139 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:12.139 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.b24 00:11:12.139 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.139 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.139 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.139 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.b24 00:11:12.139 16:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.b24 00:11:12.397 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Twt ]] 00:11:12.397 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Twt 00:11:12.397 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.397 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.397 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.397 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Twt 00:11:12.397 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Twt 00:11:12.655 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:12.655 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZMI 00:11:12.656 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.656 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.656 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.656 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ZMI 00:11:12.656 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ZMI 00:11:12.914 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Pt6 ]] 00:11:12.914 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Pt6 00:11:12.914 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.914 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.914 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.914 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Pt6 00:11:12.914 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Pt6 00:11:13.172 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:13.173 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.UWq 00:11:13.173 16:46:36 
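Between the key generation and the per-digest test loop, target/auth.sh registers every generated file with both keyrings: rpc_cmd talks to the target on the default /var/tmp/spdk.sock, while the hostrpc wrapper drives the initiator instance on /var/tmp/host.sock. A condensed sketch of that registration loop, assuming keys[] and ckeys[] hold the file paths produced earlier:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for i in "${!keys[@]}"; do
    # target-side keyring entry, referenced later as --dhchap-key key$i
    "$RPC" keyring_file_add_key "key$i" "${keys[i]}"
    # host-side keyring entry for the in-process bdev_nvme initiator
    "$RPC" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"

    if [[ -n ${ckeys[i]} ]]; then
        # controller (bidirectional) secret, referenced as --dhchap-ctrlr-key ckey$i
        "$RPC" keyring_file_add_key "ckey$i" "${ckeys[i]}"
        "$RPC" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done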
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.173 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.173 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.173 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.UWq 00:11:13.173 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.UWq 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.741 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.308 00:11:14.308 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.308 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.308 16:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.567 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.567 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.567 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.567 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.567 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.567 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.567 { 00:11:14.567 "cntlid": 1, 00:11:14.567 "qid": 0, 00:11:14.567 "state": "enabled", 00:11:14.567 "thread": "nvmf_tgt_poll_group_000", 00:11:14.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:14.567 "listen_address": { 00:11:14.567 "trtype": "TCP", 00:11:14.567 "adrfam": "IPv4", 00:11:14.567 "traddr": "10.0.0.3", 00:11:14.567 "trsvcid": "4420" 00:11:14.567 }, 00:11:14.567 "peer_address": { 00:11:14.567 "trtype": "TCP", 00:11:14.567 "adrfam": "IPv4", 00:11:14.567 "traddr": "10.0.0.1", 00:11:14.567 "trsvcid": "46606" 00:11:14.567 }, 00:11:14.567 "auth": { 00:11:14.567 "state": "completed", 00:11:14.567 "digest": "sha256", 00:11:14.567 "dhgroup": "null" 00:11:14.567 } 00:11:14.567 } 00:11:14.567 ]' 00:11:14.567 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.567 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:14.567 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.567 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:14.567 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.567 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.567 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.567 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.826 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:11:14.826 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:11:20.095 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.095 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:20.095 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.095 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.095 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.095 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.095 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:20.095 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.095 16:46:43 
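Each iteration of the loop traced here follows the same RPC sequence: restrict the host's allowed DH-HMAC-CHAP digests and DH groups with bdev_nvme_set_options, authorize the host NQN on the subsystem with the chosen key (and controller key, when one exists), attach a bdev_nvme controller through /var/tmp/host.sock so the in-process initiator authenticates, and check that the qpair reports auth state "completed" with the expected digest and dhgroup. A condensed sketch of one such iteration, using the NQN, host UUID, and addresses from this log; ordering is simplified relative to the full script:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b
SUBNQN=nqn.2024-03.io.spdk:cnode0

# host side: only offer sha256 with the null DH group for this pass
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null

# target side: authorize the host NQN with key0/ckey0 from the keyring
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: attaching a controller triggers DH-HMAC-CHAP authentication
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# confirm the negotiated auth state/digest/dhgroup, then tear down
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"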
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.095 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.095 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.354 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.354 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.354 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.354 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.354 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.354 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.354 { 00:11:20.354 "cntlid": 3, 00:11:20.354 "qid": 0, 00:11:20.354 "state": "enabled", 00:11:20.354 "thread": "nvmf_tgt_poll_group_000", 00:11:20.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:20.354 "listen_address": { 00:11:20.354 "trtype": "TCP", 00:11:20.354 "adrfam": "IPv4", 00:11:20.354 "traddr": "10.0.0.3", 00:11:20.354 "trsvcid": "4420" 00:11:20.354 }, 00:11:20.354 "peer_address": { 00:11:20.354 "trtype": "TCP", 00:11:20.354 "adrfam": "IPv4", 00:11:20.354 "traddr": "10.0.0.1", 00:11:20.354 "trsvcid": "46636" 00:11:20.354 }, 00:11:20.354 "auth": { 00:11:20.354 "state": "completed", 00:11:20.354 "digest": "sha256", 00:11:20.354 "dhgroup": "null" 00:11:20.354 } 00:11:20.354 } 00:11:20.354 ]' 00:11:20.354 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:20.354 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:20.354 16:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.354 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:20.354 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.354 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.354 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.354 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.613 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret 
DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:11:20.613 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:11:21.547 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.547 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:21.547 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.547 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.547 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.547 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.547 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:21.547 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:21.804 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:21.804 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.804 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:21.804 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:21.804 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:21.804 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.804 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.804 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.804 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.804 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.804 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.804 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.804 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.061 00:11:22.061 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.061 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.061 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.319 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.319 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.319 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.319 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.319 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.319 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.319 { 00:11:22.319 "cntlid": 5, 00:11:22.319 "qid": 0, 00:11:22.320 "state": "enabled", 00:11:22.320 "thread": "nvmf_tgt_poll_group_000", 00:11:22.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:22.320 "listen_address": { 00:11:22.320 "trtype": "TCP", 00:11:22.320 "adrfam": "IPv4", 00:11:22.320 "traddr": "10.0.0.3", 00:11:22.320 "trsvcid": "4420" 00:11:22.320 }, 00:11:22.320 "peer_address": { 00:11:22.320 "trtype": "TCP", 00:11:22.320 "adrfam": "IPv4", 00:11:22.320 "traddr": "10.0.0.1", 00:11:22.320 "trsvcid": "46660" 00:11:22.320 }, 00:11:22.320 "auth": { 00:11:22.320 "state": "completed", 00:11:22.320 "digest": "sha256", 00:11:22.320 "dhgroup": "null" 00:11:22.320 } 00:11:22.320 } 00:11:22.320 ]' 00:11:22.320 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.320 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:22.320 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.320 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:22.320 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.578 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.578 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.578 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.837 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:11:22.837 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:11:23.403 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.403 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:23.403 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.403 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.403 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.403 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.403 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:23.403 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:23.662 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:23.662 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.662 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:23.662 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:23.662 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:23.662 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.662 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:11:23.662 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.662 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.662 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.662 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:23.662 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:23.662 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:23.921 00:11:24.180 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.180 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.180 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:24.438 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.438 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.438 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.438 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.438 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.438 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.438 { 00:11:24.438 "cntlid": 7, 00:11:24.438 "qid": 0, 00:11:24.438 "state": "enabled", 00:11:24.438 "thread": "nvmf_tgt_poll_group_000", 00:11:24.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:24.438 "listen_address": { 00:11:24.438 "trtype": "TCP", 00:11:24.438 "adrfam": "IPv4", 00:11:24.438 "traddr": "10.0.0.3", 00:11:24.438 "trsvcid": "4420" 00:11:24.438 }, 00:11:24.438 "peer_address": { 00:11:24.438 "trtype": "TCP", 00:11:24.438 "adrfam": "IPv4", 00:11:24.438 "traddr": "10.0.0.1", 00:11:24.438 "trsvcid": "41898" 00:11:24.438 }, 00:11:24.438 "auth": { 00:11:24.438 "state": "completed", 00:11:24.438 "digest": "sha256", 00:11:24.438 "dhgroup": "null" 00:11:24.438 } 00:11:24.438 } 00:11:24.438 ]' 00:11:24.438 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.438 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:24.438 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.438 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:24.438 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.438 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.438 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.438 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.697 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:11:24.697 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:11:25.632 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.632 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:25.632 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.632 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.632 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.632 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:25.632 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.632 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:25.632 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:25.911 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:25.911 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.911 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:25.911 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:25.911 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:25.911 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.911 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.911 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.911 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.911 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.911 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.911 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.911 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.170 00:11:26.170 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.170 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.170 16:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.428 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.428 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.428 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.428 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.428 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.428 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.428 { 00:11:26.428 "cntlid": 9, 00:11:26.428 "qid": 0, 00:11:26.428 "state": "enabled", 00:11:26.428 "thread": "nvmf_tgt_poll_group_000", 00:11:26.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:26.428 "listen_address": { 00:11:26.428 "trtype": "TCP", 00:11:26.428 "adrfam": "IPv4", 00:11:26.428 "traddr": "10.0.0.3", 00:11:26.428 "trsvcid": "4420" 00:11:26.428 }, 00:11:26.428 "peer_address": { 00:11:26.428 "trtype": "TCP", 00:11:26.428 "adrfam": "IPv4", 00:11:26.428 "traddr": "10.0.0.1", 00:11:26.428 "trsvcid": "41918" 00:11:26.428 }, 00:11:26.428 "auth": { 00:11:26.428 "state": "completed", 00:11:26.428 "digest": "sha256", 00:11:26.428 "dhgroup": "ffdhe2048" 00:11:26.428 } 00:11:26.428 } 00:11:26.428 ]' 00:11:26.428 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.428 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.428 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.687 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:26.687 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.687 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.687 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.687 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.945 
16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:11:26.945 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:11:27.512 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.512 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:27.512 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.512 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.512 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.512 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.512 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:27.512 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:27.771 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:27.771 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.771 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:27.771 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:27.771 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:27.771 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.771 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.771 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.771 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.771 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.771 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.771 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.771 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.339 00:11:28.339 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.339 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.339 16:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.339 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.339 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.339 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.339 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.339 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.339 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.339 { 00:11:28.339 "cntlid": 11, 00:11:28.339 "qid": 0, 00:11:28.339 "state": "enabled", 00:11:28.339 "thread": "nvmf_tgt_poll_group_000", 00:11:28.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:28.339 "listen_address": { 00:11:28.339 "trtype": "TCP", 00:11:28.339 "adrfam": "IPv4", 00:11:28.339 "traddr": "10.0.0.3", 00:11:28.339 "trsvcid": "4420" 00:11:28.339 }, 00:11:28.339 "peer_address": { 00:11:28.339 "trtype": "TCP", 00:11:28.339 "adrfam": "IPv4", 00:11:28.339 "traddr": "10.0.0.1", 00:11:28.339 "trsvcid": "41936" 00:11:28.339 }, 00:11:28.339 "auth": { 00:11:28.339 "state": "completed", 00:11:28.339 "digest": "sha256", 00:11:28.339 "dhgroup": "ffdhe2048" 00:11:28.339 } 00:11:28.339 } 00:11:28.339 ]' 00:11:28.339 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.598 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:28.598 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.598 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:28.598 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:28.598 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.598 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.598 
16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.858 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:11:28.858 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:11:29.427 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.427 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:29.427 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.427 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.427 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.427 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.427 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:29.427 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:29.997 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:29.997 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.997 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:29.997 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:29.997 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:29.997 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.997 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.997 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.997 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.997 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:29.997 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.997 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.997 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.256 00:11:30.256 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.256 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.256 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.516 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.517 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.517 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.517 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.517 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.517 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:30.517 { 00:11:30.517 "cntlid": 13, 00:11:30.517 "qid": 0, 00:11:30.517 "state": "enabled", 00:11:30.517 "thread": "nvmf_tgt_poll_group_000", 00:11:30.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:30.517 "listen_address": { 00:11:30.517 "trtype": "TCP", 00:11:30.517 "adrfam": "IPv4", 00:11:30.517 "traddr": "10.0.0.3", 00:11:30.517 "trsvcid": "4420" 00:11:30.517 }, 00:11:30.517 "peer_address": { 00:11:30.517 "trtype": "TCP", 00:11:30.517 "adrfam": "IPv4", 00:11:30.517 "traddr": "10.0.0.1", 00:11:30.517 "trsvcid": "41958" 00:11:30.517 }, 00:11:30.517 "auth": { 00:11:30.517 "state": "completed", 00:11:30.517 "digest": "sha256", 00:11:30.517 "dhgroup": "ffdhe2048" 00:11:30.517 } 00:11:30.517 } 00:11:30.517 ]' 00:11:30.517 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:30.517 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:30.517 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:30.517 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:30.517 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:30.517 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.517 16:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.517 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.087 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:11:31.087 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:11:31.692 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.692 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:31.692 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.692 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.692 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.692 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.692 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:31.692 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:31.951 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:31.951 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:31.951 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:31.951 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:31.951 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:31.951 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.951 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:11:31.951 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.951 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
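The trace above cycles through one procedure per key/dhgroup combination: restrict the host's DH-HMAC-CHAP options, register the host NQN on the subsystem together with a host key and (optionally) a controller key, attach a controller from the host-side SPDK application, and read the negotiated digest, dhgroup and authentication state back from the target with nvmf_subsystem_get_qpairs. A minimal sketch of one such iteration, assuming the same addresses and NQNs as this run and that keys named key1/ckey1 have already been loaded into both applications (the key setup is not part of this excerpt, and the plain rpc.py call standing in for rpc_cmd on the target side is likewise an assumption):

# host-side RPC socket used by auth.sh in this run; the target socket is assumed to be the default
HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b

# 1. limit the host to a single digest/dhgroup pair
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 2. allow the host on the subsystem with a host key and a controller (bidirectional) key
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. attach a controller from the host application, authenticating with the same keys
$HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
  -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 4. verify what was negotiated on the target side
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# 5. tear down before the next key/dhgroup iteration
$HOSTRPC bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

In the trace itself the same loop also exercises the kernel initiator (nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...) with the plaintext forms of the keys, runs over keys key0 through key3 (key3 is added without a controller key), and repeats for ffdhe2048, ffdhe3072 and ffdhe4096 with the sha256 digest.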
00:11:31.951 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.951 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:31.951 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:31.951 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:32.209 00:11:32.209 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.209 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.209 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.468 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.468 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.468 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.468 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.468 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.468 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.468 { 00:11:32.468 "cntlid": 15, 00:11:32.468 "qid": 0, 00:11:32.468 "state": "enabled", 00:11:32.468 "thread": "nvmf_tgt_poll_group_000", 00:11:32.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:32.468 "listen_address": { 00:11:32.468 "trtype": "TCP", 00:11:32.468 "adrfam": "IPv4", 00:11:32.468 "traddr": "10.0.0.3", 00:11:32.468 "trsvcid": "4420" 00:11:32.468 }, 00:11:32.468 "peer_address": { 00:11:32.468 "trtype": "TCP", 00:11:32.468 "adrfam": "IPv4", 00:11:32.468 "traddr": "10.0.0.1", 00:11:32.468 "trsvcid": "41986" 00:11:32.468 }, 00:11:32.468 "auth": { 00:11:32.468 "state": "completed", 00:11:32.468 "digest": "sha256", 00:11:32.468 "dhgroup": "ffdhe2048" 00:11:32.468 } 00:11:32.468 } 00:11:32.468 ]' 00:11:32.468 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.468 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:32.468 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.727 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:32.727 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.727 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.727 
16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.727 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.985 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:11:32.985 16:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:11:33.553 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.553 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:33.553 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.553 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.812 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.812 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:33.812 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:33.812 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:33.812 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:33.812 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:33.812 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.813 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:33.813 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:33.813 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:33.813 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.813 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.813 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.813 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:33.813 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.813 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.813 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.813 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.380 00:11:34.380 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.380 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.380 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.639 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.640 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.640 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.640 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.640 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.640 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.640 { 00:11:34.640 "cntlid": 17, 00:11:34.640 "qid": 0, 00:11:34.640 "state": "enabled", 00:11:34.640 "thread": "nvmf_tgt_poll_group_000", 00:11:34.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:34.640 "listen_address": { 00:11:34.640 "trtype": "TCP", 00:11:34.640 "adrfam": "IPv4", 00:11:34.640 "traddr": "10.0.0.3", 00:11:34.640 "trsvcid": "4420" 00:11:34.640 }, 00:11:34.640 "peer_address": { 00:11:34.640 "trtype": "TCP", 00:11:34.640 "adrfam": "IPv4", 00:11:34.640 "traddr": "10.0.0.1", 00:11:34.640 "trsvcid": "55856" 00:11:34.640 }, 00:11:34.640 "auth": { 00:11:34.640 "state": "completed", 00:11:34.640 "digest": "sha256", 00:11:34.640 "dhgroup": "ffdhe3072" 00:11:34.640 } 00:11:34.640 } 00:11:34.640 ]' 00:11:34.640 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.640 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:34.640 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.640 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:34.640 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.640 16:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.640 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.640 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.208 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:11:35.208 16:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:11:35.775 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.775 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:35.775 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.775 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.775 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.775 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.775 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:35.775 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:36.034 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:36.034 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.034 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:36.034 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:36.034 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:36.034 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.034 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:36.034 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.034 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.034 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.034 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.035 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.035 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.293 00:11:36.293 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.293 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.293 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.552 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.552 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.552 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.552 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.552 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.552 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.552 { 00:11:36.552 "cntlid": 19, 00:11:36.552 "qid": 0, 00:11:36.552 "state": "enabled", 00:11:36.552 "thread": "nvmf_tgt_poll_group_000", 00:11:36.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:36.552 "listen_address": { 00:11:36.552 "trtype": "TCP", 00:11:36.552 "adrfam": "IPv4", 00:11:36.552 "traddr": "10.0.0.3", 00:11:36.552 "trsvcid": "4420" 00:11:36.552 }, 00:11:36.552 "peer_address": { 00:11:36.552 "trtype": "TCP", 00:11:36.552 "adrfam": "IPv4", 00:11:36.552 "traddr": "10.0.0.1", 00:11:36.552 "trsvcid": "55886" 00:11:36.552 }, 00:11:36.552 "auth": { 00:11:36.552 "state": "completed", 00:11:36.552 "digest": "sha256", 00:11:36.552 "dhgroup": "ffdhe3072" 00:11:36.552 } 00:11:36.552 } 00:11:36.552 ]' 00:11:36.552 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.552 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:36.552 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.552 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:36.552 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.811 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.811 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.811 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.811 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:11:36.811 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.747 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.315 00:11:38.315 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.315 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.315 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.574 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.574 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.574 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.574 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.574 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.574 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.574 { 00:11:38.574 "cntlid": 21, 00:11:38.574 "qid": 0, 00:11:38.574 "state": "enabled", 00:11:38.574 "thread": "nvmf_tgt_poll_group_000", 00:11:38.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:38.574 "listen_address": { 00:11:38.574 "trtype": "TCP", 00:11:38.574 "adrfam": "IPv4", 00:11:38.574 "traddr": "10.0.0.3", 00:11:38.574 "trsvcid": "4420" 00:11:38.574 }, 00:11:38.574 "peer_address": { 00:11:38.574 "trtype": "TCP", 00:11:38.574 "adrfam": "IPv4", 00:11:38.574 "traddr": "10.0.0.1", 00:11:38.574 "trsvcid": "55912" 00:11:38.574 }, 00:11:38.574 "auth": { 00:11:38.574 "state": "completed", 00:11:38.574 "digest": "sha256", 00:11:38.574 "dhgroup": "ffdhe3072" 00:11:38.574 } 00:11:38.574 } 00:11:38.574 ]' 00:11:38.574 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.574 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:38.574 16:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.574 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:38.574 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.574 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.574 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.574 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.833 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:11:38.833 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:11:39.401 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.401 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:39.401 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.401 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.401 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.401 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.401 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:39.401 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:39.969 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:39.969 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.969 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:39.969 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:39.969 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:39.969 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.969 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:11:39.969 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.969 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.969 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.969 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:39.969 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:39.969 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:40.228 00:11:40.228 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.228 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.229 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.487 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.487 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.488 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.488 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.488 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.488 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.488 { 00:11:40.488 "cntlid": 23, 00:11:40.488 "qid": 0, 00:11:40.488 "state": "enabled", 00:11:40.488 "thread": "nvmf_tgt_poll_group_000", 00:11:40.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:40.488 "listen_address": { 00:11:40.488 "trtype": "TCP", 00:11:40.488 "adrfam": "IPv4", 00:11:40.488 "traddr": "10.0.0.3", 00:11:40.488 "trsvcid": "4420" 00:11:40.488 }, 00:11:40.488 "peer_address": { 00:11:40.488 "trtype": "TCP", 00:11:40.488 "adrfam": "IPv4", 00:11:40.488 "traddr": "10.0.0.1", 00:11:40.488 "trsvcid": "55950" 00:11:40.488 }, 00:11:40.488 "auth": { 00:11:40.488 "state": "completed", 00:11:40.488 "digest": "sha256", 00:11:40.488 "dhgroup": "ffdhe3072" 00:11:40.488 } 00:11:40.488 } 00:11:40.488 ]' 00:11:40.488 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.488 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:40.488 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.488 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:40.488 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.488 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.488 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.488 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.057 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:11:41.057 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:11:41.624 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.624 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:41.624 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.624 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.624 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.624 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:41.624 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.624 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:41.624 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:41.882 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:41.882 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.882 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:41.882 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:41.882 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:41.882 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.882 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.882 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.882 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.882 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.882 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.882 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.882 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.140 00:11:42.140 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.140 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.140 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.707 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.707 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.707 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.707 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.707 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.707 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.707 { 00:11:42.707 "cntlid": 25, 00:11:42.707 "qid": 0, 00:11:42.707 "state": "enabled", 00:11:42.707 "thread": "nvmf_tgt_poll_group_000", 00:11:42.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:42.707 "listen_address": { 00:11:42.707 "trtype": "TCP", 00:11:42.707 "adrfam": "IPv4", 00:11:42.707 "traddr": "10.0.0.3", 00:11:42.707 "trsvcid": "4420" 00:11:42.707 }, 00:11:42.707 "peer_address": { 00:11:42.707 "trtype": "TCP", 00:11:42.707 "adrfam": "IPv4", 00:11:42.707 "traddr": "10.0.0.1", 00:11:42.707 "trsvcid": "55980" 00:11:42.707 }, 00:11:42.707 "auth": { 00:11:42.707 "state": "completed", 00:11:42.707 "digest": "sha256", 00:11:42.707 "dhgroup": "ffdhe4096" 00:11:42.707 } 00:11:42.707 } 00:11:42.707 ]' 00:11:42.707 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:42.707 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:42.707 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.707 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:42.707 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.707 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.707 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.707 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.965 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:11:42.965 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:11:43.530 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.530 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:43.530 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.530 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.530 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.530 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.530 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:43.530 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:43.788 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:43.788 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.788 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:43.788 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:43.788 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:43.788 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.788 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.789 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.789 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.046 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.046 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.047 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.047 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.305 00:11:44.305 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.305 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.305 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.563 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.563 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.563 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.563 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.563 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.563 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.563 { 00:11:44.563 "cntlid": 27, 00:11:44.563 "qid": 0, 00:11:44.563 "state": "enabled", 00:11:44.563 "thread": "nvmf_tgt_poll_group_000", 00:11:44.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:44.563 "listen_address": { 00:11:44.563 "trtype": "TCP", 00:11:44.563 "adrfam": "IPv4", 00:11:44.563 "traddr": "10.0.0.3", 00:11:44.563 "trsvcid": "4420" 00:11:44.563 }, 00:11:44.563 "peer_address": { 00:11:44.563 "trtype": "TCP", 00:11:44.563 "adrfam": "IPv4", 00:11:44.563 "traddr": "10.0.0.1", 00:11:44.563 "trsvcid": "48372" 00:11:44.563 }, 00:11:44.563 "auth": { 00:11:44.563 "state": "completed", 
00:11:44.563 "digest": "sha256", 00:11:44.563 "dhgroup": "ffdhe4096" 00:11:44.563 } 00:11:44.563 } 00:11:44.563 ]' 00:11:44.563 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.563 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.563 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.822 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:44.822 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.822 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.822 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.822 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.082 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:11:45.082 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:11:45.671 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.671 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:45.671 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.671 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.671 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.671 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.671 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:45.671 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:45.930 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:45.930 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.930 16:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:45.930 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:45.930 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:45.930 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.930 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.930 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.930 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.930 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.930 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.930 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.930 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.190 00:11:46.190 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.190 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.190 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.450 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.450 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.450 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.450 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.450 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.450 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.450 { 00:11:46.450 "cntlid": 29, 00:11:46.450 "qid": 0, 00:11:46.450 "state": "enabled", 00:11:46.450 "thread": "nvmf_tgt_poll_group_000", 00:11:46.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:46.450 "listen_address": { 00:11:46.450 "trtype": "TCP", 00:11:46.450 "adrfam": "IPv4", 00:11:46.450 "traddr": "10.0.0.3", 00:11:46.450 "trsvcid": "4420" 00:11:46.450 }, 00:11:46.450 "peer_address": { 00:11:46.450 "trtype": "TCP", 00:11:46.450 "adrfam": 
"IPv4", 00:11:46.450 "traddr": "10.0.0.1", 00:11:46.450 "trsvcid": "48386" 00:11:46.450 }, 00:11:46.450 "auth": { 00:11:46.450 "state": "completed", 00:11:46.450 "digest": "sha256", 00:11:46.450 "dhgroup": "ffdhe4096" 00:11:46.450 } 00:11:46.450 } 00:11:46.450 ]' 00:11:46.450 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.710 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.710 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.710 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:46.710 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.710 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.710 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.710 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.969 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:11:46.969 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:47.906 16:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:47.906 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:48.473 00:11:48.473 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.473 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.473 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.732 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.732 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.732 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.732 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.732 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.732 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.732 { 00:11:48.732 "cntlid": 31, 00:11:48.732 "qid": 0, 00:11:48.732 "state": "enabled", 00:11:48.732 "thread": "nvmf_tgt_poll_group_000", 00:11:48.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:48.732 "listen_address": { 00:11:48.732 "trtype": "TCP", 00:11:48.732 "adrfam": "IPv4", 00:11:48.732 "traddr": "10.0.0.3", 00:11:48.732 "trsvcid": "4420" 00:11:48.732 }, 00:11:48.732 "peer_address": { 00:11:48.732 "trtype": "TCP", 
00:11:48.732 "adrfam": "IPv4", 00:11:48.732 "traddr": "10.0.0.1", 00:11:48.732 "trsvcid": "48414" 00:11:48.732 }, 00:11:48.732 "auth": { 00:11:48.732 "state": "completed", 00:11:48.732 "digest": "sha256", 00:11:48.732 "dhgroup": "ffdhe4096" 00:11:48.732 } 00:11:48.732 } 00:11:48.732 ]' 00:11:48.732 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.732 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.732 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.732 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:48.732 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.732 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.732 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.732 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.300 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:11:49.300 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:11:49.558 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.558 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:49.558 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.558 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:49.817 
16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.817 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.386 00:11:50.386 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.386 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.386 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.644 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.644 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.644 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.644 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.644 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.644 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.644 { 00:11:50.644 "cntlid": 33, 00:11:50.644 "qid": 0, 00:11:50.644 "state": "enabled", 00:11:50.644 "thread": "nvmf_tgt_poll_group_000", 00:11:50.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:50.644 "listen_address": { 00:11:50.644 "trtype": "TCP", 00:11:50.644 "adrfam": "IPv4", 00:11:50.644 "traddr": 
"10.0.0.3", 00:11:50.644 "trsvcid": "4420" 00:11:50.644 }, 00:11:50.644 "peer_address": { 00:11:50.644 "trtype": "TCP", 00:11:50.644 "adrfam": "IPv4", 00:11:50.644 "traddr": "10.0.0.1", 00:11:50.644 "trsvcid": "48432" 00:11:50.644 }, 00:11:50.644 "auth": { 00:11:50.644 "state": "completed", 00:11:50.644 "digest": "sha256", 00:11:50.644 "dhgroup": "ffdhe6144" 00:11:50.644 } 00:11:50.644 } 00:11:50.644 ]' 00:11:50.644 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.644 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.644 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.645 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:50.645 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.645 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.645 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.645 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.212 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:11:51.212 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:11:51.793 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.793 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:51.793 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.793 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.793 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.793 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.793 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:51.793 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:52.052 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:52.052 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.052 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:52.052 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:52.052 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:52.052 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.052 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.052 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.052 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.052 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.052 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.052 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.052 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.621 00:11:52.621 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.621 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.621 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.889 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.889 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.889 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.889 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.889 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.889 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.889 { 00:11:52.889 "cntlid": 35, 00:11:52.889 "qid": 0, 00:11:52.889 "state": "enabled", 00:11:52.889 "thread": "nvmf_tgt_poll_group_000", 
00:11:52.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:52.889 "listen_address": { 00:11:52.889 "trtype": "TCP", 00:11:52.889 "adrfam": "IPv4", 00:11:52.889 "traddr": "10.0.0.3", 00:11:52.889 "trsvcid": "4420" 00:11:52.889 }, 00:11:52.889 "peer_address": { 00:11:52.889 "trtype": "TCP", 00:11:52.889 "adrfam": "IPv4", 00:11:52.889 "traddr": "10.0.0.1", 00:11:52.889 "trsvcid": "34822" 00:11:52.889 }, 00:11:52.889 "auth": { 00:11:52.889 "state": "completed", 00:11:52.889 "digest": "sha256", 00:11:52.889 "dhgroup": "ffdhe6144" 00:11:52.889 } 00:11:52.889 } 00:11:52.889 ]' 00:11:52.889 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.889 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:52.889 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.889 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:52.889 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.889 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.889 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.889 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.148 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:11:53.148 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:11:54.085 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.085 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:54.085 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.085 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.085 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.085 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.085 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:54.085 16:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:54.085 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:54.085 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.086 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:54.086 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:54.086 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:54.086 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.086 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.086 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.086 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.086 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.086 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.086 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.086 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.653 00:11:54.653 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.653 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.654 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.913 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.913 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.913 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.913 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.913 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.913 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.913 { 
00:11:54.913 "cntlid": 37, 00:11:54.913 "qid": 0, 00:11:54.913 "state": "enabled", 00:11:54.913 "thread": "nvmf_tgt_poll_group_000", 00:11:54.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:54.913 "listen_address": { 00:11:54.913 "trtype": "TCP", 00:11:54.913 "adrfam": "IPv4", 00:11:54.913 "traddr": "10.0.0.3", 00:11:54.913 "trsvcid": "4420" 00:11:54.913 }, 00:11:54.913 "peer_address": { 00:11:54.913 "trtype": "TCP", 00:11:54.913 "adrfam": "IPv4", 00:11:54.913 "traddr": "10.0.0.1", 00:11:54.913 "trsvcid": "34842" 00:11:54.913 }, 00:11:54.913 "auth": { 00:11:54.913 "state": "completed", 00:11:54.913 "digest": "sha256", 00:11:54.913 "dhgroup": "ffdhe6144" 00:11:54.913 } 00:11:54.913 } 00:11:54.913 ]' 00:11:54.913 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.913 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:54.913 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.913 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:54.913 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.172 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.172 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.172 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.172 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:11:55.172 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:11:55.740 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.740 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:55.740 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.740 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.999 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.999 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.999 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:55.999 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:56.259 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:56.259 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.259 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:56.259 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:56.259 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:56.259 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.259 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:11:56.259 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.259 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.259 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.259 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:56.259 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:56.259 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:56.518 00:11:56.518 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.518 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.518 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.776 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.776 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.776 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.776 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.776 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.776 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:56.776 { 00:11:56.776 "cntlid": 39, 00:11:56.776 "qid": 0, 00:11:56.776 "state": "enabled", 00:11:56.776 "thread": "nvmf_tgt_poll_group_000", 00:11:56.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:56.776 "listen_address": { 00:11:56.776 "trtype": "TCP", 00:11:56.776 "adrfam": "IPv4", 00:11:56.776 "traddr": "10.0.0.3", 00:11:56.776 "trsvcid": "4420" 00:11:56.776 }, 00:11:56.776 "peer_address": { 00:11:56.776 "trtype": "TCP", 00:11:56.776 "adrfam": "IPv4", 00:11:56.776 "traddr": "10.0.0.1", 00:11:56.776 "trsvcid": "34870" 00:11:56.776 }, 00:11:56.776 "auth": { 00:11:56.776 "state": "completed", 00:11:56.777 "digest": "sha256", 00:11:56.777 "dhgroup": "ffdhe6144" 00:11:56.777 } 00:11:56.777 } 00:11:56.777 ]' 00:11:56.777 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.035 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:57.035 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.035 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:57.035 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.035 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.035 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.035 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.294 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:11:57.294 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:11:57.861 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.121 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:11:58.121 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.121 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.121 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.121 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:58.121 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.121 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:58.121 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:58.381 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:58.381 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.381 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:58.381 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:58.381 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:58.381 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.381 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.381 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.381 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.381 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.381 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.381 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.381 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.948 00:11:58.948 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.948 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.948 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.207 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.207 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.207 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.207 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.207 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:59.207 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.207 { 00:11:59.207 "cntlid": 41, 00:11:59.207 "qid": 0, 00:11:59.207 "state": "enabled", 00:11:59.207 "thread": "nvmf_tgt_poll_group_000", 00:11:59.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:11:59.207 "listen_address": { 00:11:59.207 "trtype": "TCP", 00:11:59.207 "adrfam": "IPv4", 00:11:59.207 "traddr": "10.0.0.3", 00:11:59.207 "trsvcid": "4420" 00:11:59.207 }, 00:11:59.207 "peer_address": { 00:11:59.207 "trtype": "TCP", 00:11:59.207 "adrfam": "IPv4", 00:11:59.207 "traddr": "10.0.0.1", 00:11:59.207 "trsvcid": "34892" 00:11:59.207 }, 00:11:59.207 "auth": { 00:11:59.207 "state": "completed", 00:11:59.207 "digest": "sha256", 00:11:59.207 "dhgroup": "ffdhe8192" 00:11:59.207 } 00:11:59.207 } 00:11:59.207 ]' 00:11:59.207 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.207 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:59.207 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.466 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:59.466 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.466 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.466 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.466 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.724 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:11:59.724 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:00.376 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.376 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:00.376 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.376 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.376 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
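The stretch of trace above completes one pass of the authentication loop (sha256 digest, ffdhe8192 dhgroup, key index 0): host-side options are set, the host is added to the subsystem with a key pair, a controller is attached over TCP, and the resulting qpair is inspected. For readability, here is a condensed, hedged sketch in bash of that host-side RPC sequence, using only commands and arguments that appear verbatim in the trace. The variable names are introduced for illustration only, and the registration of the DH-HMAC-CHAP keys (key0..key3 / ckey0..ckey3) is assumed to have happened earlier in auth.sh, outside this section of the log.

# Condensed sketch of one iteration as traced above (sha256 / ffdhe8192, key0).
# Socket paths and NQNs are copied from the trace; key registration is not shown here.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b
subnqn=nqn.2024-03.io.spdk:cnode0

# Restrict the host-side bdev_nvme driver to the digest/dhgroup under test
# (host-side RPCs go through /var/tmp/host.sock in this trace).
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Allow the host on the target subsystem with the matching key pair
# (target-side RPCs use the default rpc_cmd socket in the trace).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller over TCP; this is where DH-HMAC-CHAP actually runs.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0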
00:12:00.376 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.376 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:00.376 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:00.634 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:12:00.634 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.634 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:00.634 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:00.634 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:00.634 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.634 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.634 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.634 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.634 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.634 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.634 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.634 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.201 00:12:01.461 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.461 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.461 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.720 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.720 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.720 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.720 16:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.720 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.720 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.720 { 00:12:01.720 "cntlid": 43, 00:12:01.720 "qid": 0, 00:12:01.720 "state": "enabled", 00:12:01.720 "thread": "nvmf_tgt_poll_group_000", 00:12:01.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:01.720 "listen_address": { 00:12:01.720 "trtype": "TCP", 00:12:01.720 "adrfam": "IPv4", 00:12:01.720 "traddr": "10.0.0.3", 00:12:01.720 "trsvcid": "4420" 00:12:01.720 }, 00:12:01.720 "peer_address": { 00:12:01.720 "trtype": "TCP", 00:12:01.720 "adrfam": "IPv4", 00:12:01.720 "traddr": "10.0.0.1", 00:12:01.720 "trsvcid": "34902" 00:12:01.720 }, 00:12:01.720 "auth": { 00:12:01.720 "state": "completed", 00:12:01.720 "digest": "sha256", 00:12:01.720 "dhgroup": "ffdhe8192" 00:12:01.720 } 00:12:01.720 } 00:12:01.720 ]' 00:12:01.720 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.720 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:01.720 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.720 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:01.720 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.720 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.720 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.720 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.287 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:02.287 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:02.856 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.856 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:02.856 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.856 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
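After each attach, the script confirms on the target that the qpair really negotiated the expected digest and dhgroup and that authentication completed, then detaches, repeats the handshake once more with the kernel initiator using the plain-text DHHC-1 secrets, and removes the host before the next key. A minimal sketch of that verify-and-teardown half, mirroring the jq checks in the trace; the rpc/NQN variables are as in the sketch above, and the secret values are reduced to placeholders rather than repeating the base64 strings from the log.

# Verify the authenticated qpair on the target side.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear down the SPDK host-side controller, re-run the handshake with the kernel
# initiator, then drop the host from the subsystem.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
secret='DHHC-1:01:...'        # placeholder; the real secrets appear verbatim in the log
ctrl_secret='DHHC-1:02:...'   # placeholder
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "${hostnqn#*uuid:}" -l 0 \
    --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"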
00:12:02.856 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.856 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.856 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:02.856 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:03.115 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:12:03.115 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.115 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:03.115 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:03.115 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:03.115 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.115 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.115 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.115 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.115 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.115 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.115 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.115 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.681 00:12:03.681 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.681 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.681 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.939 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.939 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.939 16:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.939 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.939 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.939 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.939 { 00:12:03.939 "cntlid": 45, 00:12:03.939 "qid": 0, 00:12:03.939 "state": "enabled", 00:12:03.939 "thread": "nvmf_tgt_poll_group_000", 00:12:03.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:03.939 "listen_address": { 00:12:03.939 "trtype": "TCP", 00:12:03.939 "adrfam": "IPv4", 00:12:03.939 "traddr": "10.0.0.3", 00:12:03.939 "trsvcid": "4420" 00:12:03.939 }, 00:12:03.939 "peer_address": { 00:12:03.939 "trtype": "TCP", 00:12:03.939 "adrfam": "IPv4", 00:12:03.939 "traddr": "10.0.0.1", 00:12:03.939 "trsvcid": "60420" 00:12:03.939 }, 00:12:03.939 "auth": { 00:12:03.939 "state": "completed", 00:12:03.939 "digest": "sha256", 00:12:03.939 "dhgroup": "ffdhe8192" 00:12:03.939 } 00:12:03.939 } 00:12:03.939 ]' 00:12:03.939 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.939 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:03.939 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.939 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:03.939 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.198 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.198 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.198 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.457 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:12:04.457 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:12:05.025 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.025 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:05.025 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
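Two RPC endpoints alternate throughout these checks: rpc_cmd drives the nvmf target, while the hostrpc helper (the target/auth.sh@31 lines) runs the same rpc.py against a second SPDK application listening on /var/tmp/host.sock, which plays the initiator role. A minimal sketch of the helper as it appears to behave in this trace; the real definition lives in auth.sh and may differ in detail:

    # hostrpc: rpc.py pointed at the host-side application's RPC socket
    hostrpc() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }

    # the per-pass sanity check traced at target/auth.sh@73
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]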
00:12:05.025 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.025 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.025 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.025 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:05.025 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:05.284 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:05.284 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.284 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:05.284 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:05.284 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:05.284 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.284 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:12:05.284 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.284 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.284 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.284 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:05.284 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:05.284 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:05.851 00:12:06.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.109 
16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.109 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.367 { 00:12:06.367 "cntlid": 47, 00:12:06.367 "qid": 0, 00:12:06.367 "state": "enabled", 00:12:06.367 "thread": "nvmf_tgt_poll_group_000", 00:12:06.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:06.367 "listen_address": { 00:12:06.367 "trtype": "TCP", 00:12:06.367 "adrfam": "IPv4", 00:12:06.367 "traddr": "10.0.0.3", 00:12:06.367 "trsvcid": "4420" 00:12:06.367 }, 00:12:06.367 "peer_address": { 00:12:06.367 "trtype": "TCP", 00:12:06.367 "adrfam": "IPv4", 00:12:06.367 "traddr": "10.0.0.1", 00:12:06.367 "trsvcid": "60458" 00:12:06.367 }, 00:12:06.367 "auth": { 00:12:06.367 "state": "completed", 00:12:06.367 "digest": "sha256", 00:12:06.367 "dhgroup": "ffdhe8192" 00:12:06.367 } 00:12:06.367 } 00:12:06.367 ]' 00:12:06.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:06.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.367 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:06.368 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.368 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.368 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.368 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.626 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:12:06.626 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:12:07.192 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.192 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:07.192 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.192 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
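Here the trace leaves sha256/ffdhe8192 and moves to the next cell of the test matrix: the for markers at target/auth.sh@118-120 just below show a three-level loop over digests, DH groups and key ids. The following passes use sha384 with the "null" group (plain challenge-response, no ephemeral Diffie-Hellman exchange) before moving on to the RFC 7919 finite-field groups such as ffdhe2048. A sketch of the driving loop as reconstructed from those trace markers; the array contents are defined earlier in auth.sh and are not visible in this excerpt:

    for digest in "${digests[@]}"; do          # auth.sh@118
        for dhgroup in "${dhgroups[@]}"; do    # auth.sh@119
            for keyid in "${!keys[@]}"; do     # auth.sh@120
                # constrain the host-side initiator, then run one connect/verify/disconnect pass
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # auth.sh@121
                connect_authenticate "$digest" "$dhgroup" "$keyid"                                      # auth.sh@123
            done
        done
    done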
00:12:07.192 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.192 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:07.192 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:07.192 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.192 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:07.192 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:07.452 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:12:07.452 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.452 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:07.452 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:07.452 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:07.452 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.452 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.452 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.452 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.452 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.452 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.452 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.452 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.020 00:12:08.020 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.020 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.020 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.020 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.020 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.020 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.020 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.279 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.279 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.279 { 00:12:08.279 "cntlid": 49, 00:12:08.279 "qid": 0, 00:12:08.279 "state": "enabled", 00:12:08.279 "thread": "nvmf_tgt_poll_group_000", 00:12:08.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:08.279 "listen_address": { 00:12:08.279 "trtype": "TCP", 00:12:08.279 "adrfam": "IPv4", 00:12:08.279 "traddr": "10.0.0.3", 00:12:08.279 "trsvcid": "4420" 00:12:08.279 }, 00:12:08.279 "peer_address": { 00:12:08.279 "trtype": "TCP", 00:12:08.279 "adrfam": "IPv4", 00:12:08.279 "traddr": "10.0.0.1", 00:12:08.279 "trsvcid": "60494" 00:12:08.279 }, 00:12:08.279 "auth": { 00:12:08.279 "state": "completed", 00:12:08.279 "digest": "sha384", 00:12:08.279 "dhgroup": "null" 00:12:08.279 } 00:12:08.279 } 00:12:08.279 ]' 00:12:08.279 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.279 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:08.279 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.279 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:08.279 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.279 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.279 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.279 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.538 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:08.538 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:09.104 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.104 16:47:32 
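Each pass asserts on the auth block reported for the first queue pair in the dump (qid 0), using the jq paths shown above; the backslash-escaped right-hand sides in the trace (\s\h\a\3\8\4 and similar) are just how bash xtrace prints the pattern operand of [[ == ]], not corruption. A sketch of that verification with the values negotiated in the pass above:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]      # digest configured for this pass
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "null" ]]        # DH group configured for this pass
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]   # authentication finished successfully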
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:09.105 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.105 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.105 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.105 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.105 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:09.105 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:09.363 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:09.363 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.363 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:09.363 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:09.363 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:09.363 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.363 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.363 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.363 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.363 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.363 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.363 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.363 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.622 00:12:09.622 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.622 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:12:09.622 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.880 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.880 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.880 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.880 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.880 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.880 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.880 { 00:12:09.880 "cntlid": 51, 00:12:09.880 "qid": 0, 00:12:09.880 "state": "enabled", 00:12:09.880 "thread": "nvmf_tgt_poll_group_000", 00:12:09.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:09.880 "listen_address": { 00:12:09.880 "trtype": "TCP", 00:12:09.880 "adrfam": "IPv4", 00:12:09.880 "traddr": "10.0.0.3", 00:12:09.880 "trsvcid": "4420" 00:12:09.880 }, 00:12:09.880 "peer_address": { 00:12:09.880 "trtype": "TCP", 00:12:09.880 "adrfam": "IPv4", 00:12:09.880 "traddr": "10.0.0.1", 00:12:09.880 "trsvcid": "60522" 00:12:09.880 }, 00:12:09.880 "auth": { 00:12:09.880 "state": "completed", 00:12:09.880 "digest": "sha384", 00:12:09.880 "dhgroup": "null" 00:12:09.880 } 00:12:09.880 } 00:12:09.880 ]' 00:12:09.880 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:10.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.139 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.398 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:10.398 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:10.965 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.965 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.965 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:10.965 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.965 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.965 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.965 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.965 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:10.965 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:11.224 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:11.224 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.224 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:11.224 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:11.224 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:11.224 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.224 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.224 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.224 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.224 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.224 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.224 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.224 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.483 00:12:11.483 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.483 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:12:11.483 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.740 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.740 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.740 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.740 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.740 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.740 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.740 { 00:12:11.740 "cntlid": 53, 00:12:11.740 "qid": 0, 00:12:11.740 "state": "enabled", 00:12:11.740 "thread": "nvmf_tgt_poll_group_000", 00:12:11.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:11.740 "listen_address": { 00:12:11.740 "trtype": "TCP", 00:12:11.740 "adrfam": "IPv4", 00:12:11.740 "traddr": "10.0.0.3", 00:12:11.740 "trsvcid": "4420" 00:12:11.740 }, 00:12:11.740 "peer_address": { 00:12:11.740 "trtype": "TCP", 00:12:11.740 "adrfam": "IPv4", 00:12:11.740 "traddr": "10.0.0.1", 00:12:11.740 "trsvcid": "60550" 00:12:11.740 }, 00:12:11.740 "auth": { 00:12:11.740 "state": "completed", 00:12:11.740 "digest": "sha384", 00:12:11.740 "dhgroup": "null" 00:12:11.740 } 00:12:11.740 } 00:12:11.740 ]' 00:12:11.740 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.998 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.998 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.998 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:11.998 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.998 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.998 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.998 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.256 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:12:12.256 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:12:12.823 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.823 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:12.823 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.823 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.823 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.823 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.823 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:12.823 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:13.390 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:13.390 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.390 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:13.390 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:13.390 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:13.390 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.390 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:12:13.390 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.390 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.390 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.390 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:13.390 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:13.390 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:13.649 00:12:13.649 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.649 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
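The host-side attach performed in every pass is worth unpacking once. The flags below are the ones used throughout this section; the key arguments are names of keys loaded earlier in auth.sh rather than the secret material itself, and key3, as in the pass above, has no companion controller key, so only the host authenticates:

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b
    # -t tcp -f ipv4     NVMe/TCP over IPv4
    # -a / -s            target listener address and service id
    # -q / -n            host NQN and subsystem NQN
    # -b nvme0           name given to the attached controller on success
    # --dhchap-key       key name registered earlier in auth.sh; add --dhchap-ctrlr-key ckeyN for bidirectional auth
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3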
00:12:13.649 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.909 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.909 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.909 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.909 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.909 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.909 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.909 { 00:12:13.909 "cntlid": 55, 00:12:13.909 "qid": 0, 00:12:13.909 "state": "enabled", 00:12:13.909 "thread": "nvmf_tgt_poll_group_000", 00:12:13.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:13.909 "listen_address": { 00:12:13.909 "trtype": "TCP", 00:12:13.909 "adrfam": "IPv4", 00:12:13.909 "traddr": "10.0.0.3", 00:12:13.909 "trsvcid": "4420" 00:12:13.909 }, 00:12:13.909 "peer_address": { 00:12:13.909 "trtype": "TCP", 00:12:13.909 "adrfam": "IPv4", 00:12:13.909 "traddr": "10.0.0.1", 00:12:13.909 "trsvcid": "49358" 00:12:13.909 }, 00:12:13.909 "auth": { 00:12:13.909 "state": "completed", 00:12:13.909 "digest": "sha384", 00:12:13.909 "dhgroup": "null" 00:12:13.909 } 00:12:13.909 } 00:12:13.909 ]' 00:12:13.909 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.909 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:13.909 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.909 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:13.909 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.909 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.909 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.909 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.257 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:12:14.257 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:12:14.828 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
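The secrets handed to nvme connect follow the NVMe-oF DH-HMAC-CHAP secret representation, DHHC-1:<hh>:<base64 key material>:, where <hh> records whether and how the raw secret was hashed before encoding (per the NVMe base specification: 00 not hashed, 01 SHA-256, 02 SHA-384, 03 SHA-512) and a final colon closes the string. In the pass that just finished only --dhchap-secret was supplied and the host was registered with --dhchap-key key3 alone, so the controller is not asked to authenticate back; the other passes also pass --dhchap-ctrl-secret for bidirectional authentication. A sketch of the two variants, with <base64> standing in for the full key strings from the log:

    # unidirectional: the host proves its identity, the controller does not
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b \
        --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b \
        --dhchap-secret 'DHHC-1:03:<base64>:'

    # bidirectional: supply the controller-side secret as well
    #   --dhchap-secret 'DHHC-1:01:<base64>:' --dhchap-ctrl-secret 'DHHC-1:02:<base64>:'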
00:12:14.828 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:14.828 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.828 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.828 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.828 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:14.828 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.828 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:14.828 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:15.395 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:15.395 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.395 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:15.396 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:15.396 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:15.396 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.396 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.396 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.396 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.396 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.396 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.396 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.396 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.653 00:12:15.653 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.653 16:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.653 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.912 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.912 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.912 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.912 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.912 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.912 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.912 { 00:12:15.912 "cntlid": 57, 00:12:15.912 "qid": 0, 00:12:15.912 "state": "enabled", 00:12:15.912 "thread": "nvmf_tgt_poll_group_000", 00:12:15.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:15.912 "listen_address": { 00:12:15.912 "trtype": "TCP", 00:12:15.912 "adrfam": "IPv4", 00:12:15.912 "traddr": "10.0.0.3", 00:12:15.912 "trsvcid": "4420" 00:12:15.912 }, 00:12:15.912 "peer_address": { 00:12:15.912 "trtype": "TCP", 00:12:15.912 "adrfam": "IPv4", 00:12:15.912 "traddr": "10.0.0.1", 00:12:15.912 "trsvcid": "49376" 00:12:15.912 }, 00:12:15.912 "auth": { 00:12:15.912 "state": "completed", 00:12:15.912 "digest": "sha384", 00:12:15.912 "dhgroup": "ffdhe2048" 00:12:15.912 } 00:12:15.912 } 00:12:15.912 ]' 00:12:15.912 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.912 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:15.912 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.912 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:15.912 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.912 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.912 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.912 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.479 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:16.479 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: 
--dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:17.045 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.045 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:17.045 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.045 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.045 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.045 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.045 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:17.045 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:17.304 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:17.304 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.304 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:17.304 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:17.304 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:17.304 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.304 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.304 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.304 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.304 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.304 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.304 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.304 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.871 00:12:17.871 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.871 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.871 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.130 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.130 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.130 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.130 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.130 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.130 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.130 { 00:12:18.130 "cntlid": 59, 00:12:18.130 "qid": 0, 00:12:18.130 "state": "enabled", 00:12:18.130 "thread": "nvmf_tgt_poll_group_000", 00:12:18.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:18.130 "listen_address": { 00:12:18.130 "trtype": "TCP", 00:12:18.130 "adrfam": "IPv4", 00:12:18.130 "traddr": "10.0.0.3", 00:12:18.130 "trsvcid": "4420" 00:12:18.130 }, 00:12:18.130 "peer_address": { 00:12:18.130 "trtype": "TCP", 00:12:18.130 "adrfam": "IPv4", 00:12:18.130 "traddr": "10.0.0.1", 00:12:18.130 "trsvcid": "49404" 00:12:18.130 }, 00:12:18.130 "auth": { 00:12:18.130 "state": "completed", 00:12:18.130 "digest": "sha384", 00:12:18.130 "dhgroup": "ffdhe2048" 00:12:18.130 } 00:12:18.130 } 00:12:18.130 ]' 00:12:18.130 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.130 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:18.130 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.130 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:18.130 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.130 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.130 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.130 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.389 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:18.389 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:18.956 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.956 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:18.956 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.956 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.956 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.956 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.956 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:18.956 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:19.214 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:19.214 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.214 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:19.214 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:19.214 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:19.214 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.214 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.214 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.214 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.214 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.214 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.214 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.214 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.781 00:12:19.781 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.781 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.781 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.041 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.041 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.041 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.041 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.041 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.041 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.041 { 00:12:20.041 "cntlid": 61, 00:12:20.041 "qid": 0, 00:12:20.041 "state": "enabled", 00:12:20.041 "thread": "nvmf_tgt_poll_group_000", 00:12:20.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:20.041 "listen_address": { 00:12:20.041 "trtype": "TCP", 00:12:20.041 "adrfam": "IPv4", 00:12:20.041 "traddr": "10.0.0.3", 00:12:20.041 "trsvcid": "4420" 00:12:20.041 }, 00:12:20.041 "peer_address": { 00:12:20.041 "trtype": "TCP", 00:12:20.041 "adrfam": "IPv4", 00:12:20.041 "traddr": "10.0.0.1", 00:12:20.041 "trsvcid": "49442" 00:12:20.041 }, 00:12:20.041 "auth": { 00:12:20.041 "state": "completed", 00:12:20.041 "digest": "sha384", 00:12:20.041 "dhgroup": "ffdhe2048" 00:12:20.041 } 00:12:20.041 } 00:12:20.041 ]' 00:12:20.041 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.041 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:20.041 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.041 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:20.041 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.041 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.041 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.041 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.300 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:12:20.300 16:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:12:20.868 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.868 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:20.868 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.868 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.868 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.868 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.868 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:20.868 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:21.434 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:21.434 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.434 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:21.434 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:21.434 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:21.434 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.434 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:12:21.434 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.434 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.434 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.434 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:21.434 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:21.434 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:21.691 00:12:21.691 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.691 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.691 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.949 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.949 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.949 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.949 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.949 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.949 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.949 { 00:12:21.949 "cntlid": 63, 00:12:21.949 "qid": 0, 00:12:21.949 "state": "enabled", 00:12:21.949 "thread": "nvmf_tgt_poll_group_000", 00:12:21.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:21.949 "listen_address": { 00:12:21.949 "trtype": "TCP", 00:12:21.949 "adrfam": "IPv4", 00:12:21.949 "traddr": "10.0.0.3", 00:12:21.949 "trsvcid": "4420" 00:12:21.949 }, 00:12:21.949 "peer_address": { 00:12:21.949 "trtype": "TCP", 00:12:21.949 "adrfam": "IPv4", 00:12:21.949 "traddr": "10.0.0.1", 00:12:21.949 "trsvcid": "49468" 00:12:21.949 }, 00:12:21.949 "auth": { 00:12:21.949 "state": "completed", 00:12:21.949 "digest": "sha384", 00:12:21.949 "dhgroup": "ffdhe2048" 00:12:21.949 } 00:12:21.949 } 00:12:21.949 ]' 00:12:21.949 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.949 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.949 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.949 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:21.949 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.949 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.949 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.949 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.208 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:12:22.208 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:23.143 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.402 00:12:23.660 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.660 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.660 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.918 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.918 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.918 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.918 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.918 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.918 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.918 { 00:12:23.918 "cntlid": 65, 00:12:23.918 "qid": 0, 00:12:23.918 "state": "enabled", 00:12:23.918 "thread": "nvmf_tgt_poll_group_000", 00:12:23.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:23.918 "listen_address": { 00:12:23.918 "trtype": "TCP", 00:12:23.918 "adrfam": "IPv4", 00:12:23.918 "traddr": "10.0.0.3", 00:12:23.918 "trsvcid": "4420" 00:12:23.918 }, 00:12:23.918 "peer_address": { 00:12:23.918 "trtype": "TCP", 00:12:23.919 "adrfam": "IPv4", 00:12:23.919 "traddr": "10.0.0.1", 00:12:23.919 "trsvcid": "56102" 00:12:23.919 }, 00:12:23.919 "auth": { 00:12:23.919 "state": "completed", 00:12:23.919 "digest": "sha384", 00:12:23.919 "dhgroup": "ffdhe3072" 00:12:23.919 } 00:12:23.919 } 00:12:23.919 ]' 00:12:23.919 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.919 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:23.919 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.919 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:23.919 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.919 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.919 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.919 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.177 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:24.177 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:24.744 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.744 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:24.744 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.744 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.744 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.744 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.744 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:24.744 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:25.310 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:25.310 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.310 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:25.310 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:25.310 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:25.310 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.310 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.310 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.310 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.310 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.310 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.310 16:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.310 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.569 00:12:25.569 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.569 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.569 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.828 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.828 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.828 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.828 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.828 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.828 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.828 { 00:12:25.828 "cntlid": 67, 00:12:25.828 "qid": 0, 00:12:25.828 "state": "enabled", 00:12:25.828 "thread": "nvmf_tgt_poll_group_000", 00:12:25.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:25.828 "listen_address": { 00:12:25.828 "trtype": "TCP", 00:12:25.828 "adrfam": "IPv4", 00:12:25.828 "traddr": "10.0.0.3", 00:12:25.828 "trsvcid": "4420" 00:12:25.828 }, 00:12:25.828 "peer_address": { 00:12:25.828 "trtype": "TCP", 00:12:25.828 "adrfam": "IPv4", 00:12:25.828 "traddr": "10.0.0.1", 00:12:25.828 "trsvcid": "56132" 00:12:25.828 }, 00:12:25.828 "auth": { 00:12:25.828 "state": "completed", 00:12:25.828 "digest": "sha384", 00:12:25.828 "dhgroup": "ffdhe3072" 00:12:25.828 } 00:12:25.828 } 00:12:25.828 ]' 00:12:25.828 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.828 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:25.828 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.828 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:25.828 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.828 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.828 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.828 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.087 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:26.087 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.022 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.588 00:12:27.588 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.588 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.588 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.846 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.846 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.846 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.846 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.846 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.846 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.846 { 00:12:27.846 "cntlid": 69, 00:12:27.846 "qid": 0, 00:12:27.846 "state": "enabled", 00:12:27.846 "thread": "nvmf_tgt_poll_group_000", 00:12:27.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:27.846 "listen_address": { 00:12:27.846 "trtype": "TCP", 00:12:27.846 "adrfam": "IPv4", 00:12:27.846 "traddr": "10.0.0.3", 00:12:27.846 "trsvcid": "4420" 00:12:27.846 }, 00:12:27.846 "peer_address": { 00:12:27.846 "trtype": "TCP", 00:12:27.846 "adrfam": "IPv4", 00:12:27.846 "traddr": "10.0.0.1", 00:12:27.846 "trsvcid": "56162" 00:12:27.846 }, 00:12:27.846 "auth": { 00:12:27.846 "state": "completed", 00:12:27.846 "digest": "sha384", 00:12:27.846 "dhgroup": "ffdhe3072" 00:12:27.846 } 00:12:27.846 } 00:12:27.846 ]' 00:12:27.846 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.846 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:27.846 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.846 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:27.846 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.846 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.846 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:27.846 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.104 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:12:28.105 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:12:28.698 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.698 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:28.698 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.698 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.698 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.698 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.698 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:28.698 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:28.957 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:28.957 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.957 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:28.957 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:28.957 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:28.957 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.957 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:12:28.957 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.957 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.957 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.957 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:28.957 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.958 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:29.526 00:12:29.526 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.526 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.526 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.526 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.526 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.526 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.526 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.526 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.526 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.526 { 00:12:29.526 "cntlid": 71, 00:12:29.526 "qid": 0, 00:12:29.526 "state": "enabled", 00:12:29.526 "thread": "nvmf_tgt_poll_group_000", 00:12:29.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:29.526 "listen_address": { 00:12:29.526 "trtype": "TCP", 00:12:29.526 "adrfam": "IPv4", 00:12:29.526 "traddr": "10.0.0.3", 00:12:29.526 "trsvcid": "4420" 00:12:29.526 }, 00:12:29.526 "peer_address": { 00:12:29.526 "trtype": "TCP", 00:12:29.526 "adrfam": "IPv4", 00:12:29.526 "traddr": "10.0.0.1", 00:12:29.526 "trsvcid": "56184" 00:12:29.526 }, 00:12:29.526 "auth": { 00:12:29.526 "state": "completed", 00:12:29.526 "digest": "sha384", 00:12:29.526 "dhgroup": "ffdhe3072" 00:12:29.526 } 00:12:29.526 } 00:12:29.526 ]' 00:12:29.526 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.786 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:29.786 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.786 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:29.786 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.786 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.786 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.786 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.045 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:12:30.045 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:12:30.614 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.614 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:30.614 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.614 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.614 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.614 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:30.614 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.614 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:30.614 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:30.873 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:30.873 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.873 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:30.873 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:30.873 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:30.873 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.873 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.873 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.873 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.873 16:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.873 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.873 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.873 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.443 00:12:31.443 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.443 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.443 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.704 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.704 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.704 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.704 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.704 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.704 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.704 { 00:12:31.704 "cntlid": 73, 00:12:31.704 "qid": 0, 00:12:31.704 "state": "enabled", 00:12:31.704 "thread": "nvmf_tgt_poll_group_000", 00:12:31.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:31.704 "listen_address": { 00:12:31.704 "trtype": "TCP", 00:12:31.704 "adrfam": "IPv4", 00:12:31.704 "traddr": "10.0.0.3", 00:12:31.704 "trsvcid": "4420" 00:12:31.704 }, 00:12:31.704 "peer_address": { 00:12:31.704 "trtype": "TCP", 00:12:31.704 "adrfam": "IPv4", 00:12:31.704 "traddr": "10.0.0.1", 00:12:31.704 "trsvcid": "56204" 00:12:31.704 }, 00:12:31.704 "auth": { 00:12:31.704 "state": "completed", 00:12:31.704 "digest": "sha384", 00:12:31.704 "dhgroup": "ffdhe4096" 00:12:31.704 } 00:12:31.704 } 00:12:31.704 ]' 00:12:31.704 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.704 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.704 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.704 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:31.704 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.704 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.704 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.704 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.964 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:31.964 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:32.535 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.795 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:32.795 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.795 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.795 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.795 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.795 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:32.795 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:33.054 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:33.054 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.054 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:33.054 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:33.054 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:33.054 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.054 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.054 16:47:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.054 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.054 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.054 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.054 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.054 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.314 00:12:33.314 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.314 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.314 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.573 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.573 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.574 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.574 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.574 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.574 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.574 { 00:12:33.574 "cntlid": 75, 00:12:33.574 "qid": 0, 00:12:33.574 "state": "enabled", 00:12:33.574 "thread": "nvmf_tgt_poll_group_000", 00:12:33.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:33.574 "listen_address": { 00:12:33.574 "trtype": "TCP", 00:12:33.574 "adrfam": "IPv4", 00:12:33.574 "traddr": "10.0.0.3", 00:12:33.574 "trsvcid": "4420" 00:12:33.574 }, 00:12:33.574 "peer_address": { 00:12:33.574 "trtype": "TCP", 00:12:33.574 "adrfam": "IPv4", 00:12:33.574 "traddr": "10.0.0.1", 00:12:33.574 "trsvcid": "50478" 00:12:33.574 }, 00:12:33.574 "auth": { 00:12:33.574 "state": "completed", 00:12:33.574 "digest": "sha384", 00:12:33.574 "dhgroup": "ffdhe4096" 00:12:33.574 } 00:12:33.574 } 00:12:33.574 ]' 00:12:33.574 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.833 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:33.833 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.833 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:33.833 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.833 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.833 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.833 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.094 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:34.094 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:34.662 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.662 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:34.662 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.662 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.662 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.662 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.662 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:34.662 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:34.921 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:34.921 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.921 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:34.921 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:34.921 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:34.921 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.921 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.921 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.921 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.921 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.921 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.921 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.921 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.489 00:12:35.489 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.489 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.489 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.748 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.748 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.748 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.748 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.748 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.748 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.748 { 00:12:35.748 "cntlid": 77, 00:12:35.748 "qid": 0, 00:12:35.748 "state": "enabled", 00:12:35.748 "thread": "nvmf_tgt_poll_group_000", 00:12:35.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:35.748 "listen_address": { 00:12:35.748 "trtype": "TCP", 00:12:35.748 "adrfam": "IPv4", 00:12:35.748 "traddr": "10.0.0.3", 00:12:35.748 "trsvcid": "4420" 00:12:35.748 }, 00:12:35.748 "peer_address": { 00:12:35.748 "trtype": "TCP", 00:12:35.748 "adrfam": "IPv4", 00:12:35.748 "traddr": "10.0.0.1", 00:12:35.748 "trsvcid": "50496" 00:12:35.748 }, 00:12:35.748 "auth": { 00:12:35.748 "state": "completed", 00:12:35.748 "digest": "sha384", 00:12:35.748 "dhgroup": "ffdhe4096" 00:12:35.748 } 00:12:35.748 } 00:12:35.748 ]' 00:12:35.748 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.748 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:35.748 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:35.748 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:35.748 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.748 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.748 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.748 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.006 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:12:36.006 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:12:36.941 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.941 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:36.941 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.941 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.941 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.941 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.941 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:36.941 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:37.199 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:37.199 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.199 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:37.199 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:37.199 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:37.199 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.200 16:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:12:37.200 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.200 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.200 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.200 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:37.200 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:37.200 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:37.457 00:12:37.457 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.457 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.457 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.715 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.715 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.715 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.715 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.715 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.715 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.715 { 00:12:37.715 "cntlid": 79, 00:12:37.715 "qid": 0, 00:12:37.715 "state": "enabled", 00:12:37.715 "thread": "nvmf_tgt_poll_group_000", 00:12:37.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:37.715 "listen_address": { 00:12:37.715 "trtype": "TCP", 00:12:37.715 "adrfam": "IPv4", 00:12:37.715 "traddr": "10.0.0.3", 00:12:37.715 "trsvcid": "4420" 00:12:37.715 }, 00:12:37.715 "peer_address": { 00:12:37.715 "trtype": "TCP", 00:12:37.715 "adrfam": "IPv4", 00:12:37.715 "traddr": "10.0.0.1", 00:12:37.715 "trsvcid": "50526" 00:12:37.715 }, 00:12:37.715 "auth": { 00:12:37.715 "state": "completed", 00:12:37.715 "digest": "sha384", 00:12:37.715 "dhgroup": "ffdhe4096" 00:12:37.715 } 00:12:37.715 } 00:12:37.715 ]' 00:12:37.715 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.974 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:37.974 16:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.974 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:37.974 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.974 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.974 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.974 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.233 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:12:38.233 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:12:38.800 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.800 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:38.800 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.800 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.800 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.800 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:38.800 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.800 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:38.800 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:39.367 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:39.367 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.367 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:39.367 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:39.367 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:39.367 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.367 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.367 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.367 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.367 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.367 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.367 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.367 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.626 00:12:39.626 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.626 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.626 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.884 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.884 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.884 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.884 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.884 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.884 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.884 { 00:12:39.884 "cntlid": 81, 00:12:39.884 "qid": 0, 00:12:39.884 "state": "enabled", 00:12:39.884 "thread": "nvmf_tgt_poll_group_000", 00:12:39.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:39.884 "listen_address": { 00:12:39.884 "trtype": "TCP", 00:12:39.884 "adrfam": "IPv4", 00:12:39.884 "traddr": "10.0.0.3", 00:12:39.884 "trsvcid": "4420" 00:12:39.884 }, 00:12:39.884 "peer_address": { 00:12:39.884 "trtype": "TCP", 00:12:39.884 "adrfam": "IPv4", 00:12:39.884 "traddr": "10.0.0.1", 00:12:39.884 "trsvcid": "50558" 00:12:39.884 }, 00:12:39.884 "auth": { 00:12:39.884 "state": "completed", 00:12:39.884 "digest": "sha384", 00:12:39.884 "dhgroup": "ffdhe6144" 00:12:39.884 } 00:12:39.884 } 00:12:39.884 ]' 00:12:39.884 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
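The cycle traced above repeats for each DH group and key index (with the sha384 digest in this stretch of the run): the host-side bdev_nvme layer is pinned to one DH-HMAC-CHAP digest and DH group, the host NQN is re-added to the subsystem with a key pair, a controller is attached and the resulting qpair's auth state is checked, and the same path is then exercised through nvme-cli with the raw DHHC-1 secrets before everything is torn down. Below is a minimal shell sketch of one such pass, reconstructed only from the rpc.py and nvme invocations visible in this trace; the key names (key0/ckey0) are assumed to have been registered earlier in the run, and the DHHC-1 secret strings are abbreviated into variables rather than copied from the log.

#!/usr/bin/env bash
# Sketch of one verification pass, assuming keys key0/ckey0 were loaded
# into the target and host keyrings earlier in the test (not shown here).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock     # RPC socket of the host-side SPDK app
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b

# Pin the host bdev_nvme layer to a single digest/dhgroup combination.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# Allow the host on the subsystem with a bidirectional key pair
# (the trace issues this against the target's default RPC socket).
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller through the host app and confirm the qpair finished
# authentication ("completed") with the expected digest and dhgroup.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

# Repeat the handshake through nvme-cli using the raw DHHC-1 secrets that
# appear in the trace (placeholders here), then tear down for the next pass.
nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 \
        --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n "$SUBNQN"
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The remainder of the trace is this same loop driven over the other key indices and the ffdhe6144 and ffdhe8192 groups.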
00:12:39.884 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:39.884 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.142 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:40.142 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.142 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.142 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.142 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.399 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:40.399 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:41.333 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.333 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:41.333 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.333 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.333 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.333 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.333 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:41.333 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:41.590 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:41.590 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.590 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:41.590 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:41.590 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:41.590 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.590 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.590 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.590 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.590 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.590 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.590 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.590 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.848 00:12:41.848 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.848 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.848 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.414 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.414 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.414 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.414 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.414 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.414 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.414 { 00:12:42.414 "cntlid": 83, 00:12:42.414 "qid": 0, 00:12:42.414 "state": "enabled", 00:12:42.414 "thread": "nvmf_tgt_poll_group_000", 00:12:42.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:42.414 "listen_address": { 00:12:42.414 "trtype": "TCP", 00:12:42.414 "adrfam": "IPv4", 00:12:42.414 "traddr": "10.0.0.3", 00:12:42.414 "trsvcid": "4420" 00:12:42.414 }, 00:12:42.414 "peer_address": { 00:12:42.414 "trtype": "TCP", 00:12:42.414 "adrfam": "IPv4", 00:12:42.414 "traddr": "10.0.0.1", 00:12:42.414 "trsvcid": "50582" 00:12:42.414 }, 00:12:42.414 "auth": { 00:12:42.414 "state": "completed", 00:12:42.414 "digest": "sha384", 
00:12:42.414 "dhgroup": "ffdhe6144" 00:12:42.414 } 00:12:42.414 } 00:12:42.414 ]' 00:12:42.414 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.414 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.414 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.414 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:42.414 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.414 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.414 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.414 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.672 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:42.672 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:43.239 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.239 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:43.239 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.239 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.239 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.239 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.239 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:43.239 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:43.498 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:43.498 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.498 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:43.498 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:43.498 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:43.498 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.498 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.498 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.498 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.498 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.498 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.498 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.499 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.101 00:12:44.101 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.101 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.101 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.359 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.359 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.359 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.359 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.359 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.359 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.359 { 00:12:44.359 "cntlid": 85, 00:12:44.359 "qid": 0, 00:12:44.359 "state": "enabled", 00:12:44.359 "thread": "nvmf_tgt_poll_group_000", 00:12:44.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:44.359 "listen_address": { 00:12:44.359 "trtype": "TCP", 00:12:44.359 "adrfam": "IPv4", 00:12:44.359 "traddr": "10.0.0.3", 00:12:44.359 "trsvcid": "4420" 00:12:44.359 }, 00:12:44.359 "peer_address": { 00:12:44.359 "trtype": "TCP", 00:12:44.359 "adrfam": "IPv4", 00:12:44.359 "traddr": "10.0.0.1", 00:12:44.359 "trsvcid": "59994" 
00:12:44.359 }, 00:12:44.359 "auth": { 00:12:44.359 "state": "completed", 00:12:44.359 "digest": "sha384", 00:12:44.359 "dhgroup": "ffdhe6144" 00:12:44.359 } 00:12:44.359 } 00:12:44.359 ]' 00:12:44.359 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.359 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:44.359 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.359 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:44.359 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.359 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.359 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.359 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.618 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:12:44.618 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:12:45.186 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.186 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:45.186 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.445 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.445 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.445 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.445 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:45.445 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:45.445 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:45.445 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:45.445 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:45.445 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:45.445 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:45.445 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.445 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:12:45.445 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.445 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.445 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.445 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:45.445 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:45.445 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:46.012 00:12:46.012 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.012 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.012 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.271 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.271 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.271 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.271 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.271 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.271 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.271 { 00:12:46.271 "cntlid": 87, 00:12:46.271 "qid": 0, 00:12:46.271 "state": "enabled", 00:12:46.271 "thread": "nvmf_tgt_poll_group_000", 00:12:46.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:46.271 "listen_address": { 00:12:46.271 "trtype": "TCP", 00:12:46.271 "adrfam": "IPv4", 00:12:46.271 "traddr": "10.0.0.3", 00:12:46.271 "trsvcid": "4420" 00:12:46.271 }, 00:12:46.271 "peer_address": { 00:12:46.271 "trtype": "TCP", 00:12:46.271 "adrfam": "IPv4", 00:12:46.271 "traddr": "10.0.0.1", 00:12:46.271 "trsvcid": 
"60030" 00:12:46.271 }, 00:12:46.271 "auth": { 00:12:46.271 "state": "completed", 00:12:46.271 "digest": "sha384", 00:12:46.271 "dhgroup": "ffdhe6144" 00:12:46.271 } 00:12:46.271 } 00:12:46.271 ]' 00:12:46.271 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.271 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:46.271 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.271 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:46.529 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.530 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.530 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.530 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.788 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:12:46.788 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:12:47.355 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.355 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:47.355 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.355 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.355 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.355 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:47.355 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.355 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:47.355 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:47.612 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:47.612 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:47.612 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:47.612 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:47.612 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:47.612 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.612 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.612 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.612 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.612 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.613 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.613 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.613 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.179 00:12:48.179 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.179 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.179 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.438 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.438 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.438 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.438 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.438 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.438 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.439 { 00:12:48.439 "cntlid": 89, 00:12:48.439 "qid": 0, 00:12:48.439 "state": "enabled", 00:12:48.439 "thread": "nvmf_tgt_poll_group_000", 00:12:48.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:48.439 "listen_address": { 00:12:48.439 "trtype": "TCP", 00:12:48.439 "adrfam": "IPv4", 00:12:48.439 "traddr": "10.0.0.3", 00:12:48.439 "trsvcid": "4420" 00:12:48.439 }, 00:12:48.439 "peer_address": { 00:12:48.439 
"trtype": "TCP", 00:12:48.439 "adrfam": "IPv4", 00:12:48.439 "traddr": "10.0.0.1", 00:12:48.439 "trsvcid": "60058" 00:12:48.439 }, 00:12:48.439 "auth": { 00:12:48.439 "state": "completed", 00:12:48.439 "digest": "sha384", 00:12:48.439 "dhgroup": "ffdhe8192" 00:12:48.439 } 00:12:48.439 } 00:12:48.439 ]' 00:12:48.439 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.439 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:48.439 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.439 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:48.439 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.439 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.439 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.439 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.006 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:49.006 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:49.574 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.574 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:49.574 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.574 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.574 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.574 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.575 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:49.575 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:49.833 16:48:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:49.833 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.833 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:49.833 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:49.833 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:49.833 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.833 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.833 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.833 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.833 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.833 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.833 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.833 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.401 00:12:50.401 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.401 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.401 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.660 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.660 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.660 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.660 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.660 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.660 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.660 { 00:12:50.660 "cntlid": 91, 00:12:50.660 "qid": 0, 00:12:50.660 "state": "enabled", 00:12:50.660 "thread": "nvmf_tgt_poll_group_000", 00:12:50.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 
00:12:50.660 "listen_address": { 00:12:50.660 "trtype": "TCP", 00:12:50.660 "adrfam": "IPv4", 00:12:50.660 "traddr": "10.0.0.3", 00:12:50.660 "trsvcid": "4420" 00:12:50.660 }, 00:12:50.660 "peer_address": { 00:12:50.660 "trtype": "TCP", 00:12:50.660 "adrfam": "IPv4", 00:12:50.660 "traddr": "10.0.0.1", 00:12:50.660 "trsvcid": "60084" 00:12:50.660 }, 00:12:50.660 "auth": { 00:12:50.660 "state": "completed", 00:12:50.660 "digest": "sha384", 00:12:50.660 "dhgroup": "ffdhe8192" 00:12:50.660 } 00:12:50.660 } 00:12:50.660 ]' 00:12:50.660 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.660 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:50.660 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.660 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:50.660 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:50.660 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.660 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.660 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.228 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:51.228 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:51.796 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.796 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:51.796 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.796 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.796 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.796 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:51.796 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:51.796 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:52.054 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:52.054 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.054 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:52.054 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:52.054 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:52.054 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.054 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.054 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.054 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.054 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.054 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.054 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.054 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.622 00:12:52.622 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.622 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.622 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.881 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.882 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.882 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.882 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.882 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.882 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:52.882 { 00:12:52.882 "cntlid": 93, 00:12:52.882 "qid": 0, 00:12:52.882 "state": "enabled", 00:12:52.882 "thread": 
"nvmf_tgt_poll_group_000", 00:12:52.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:52.882 "listen_address": { 00:12:52.882 "trtype": "TCP", 00:12:52.882 "adrfam": "IPv4", 00:12:52.882 "traddr": "10.0.0.3", 00:12:52.882 "trsvcid": "4420" 00:12:52.882 }, 00:12:52.882 "peer_address": { 00:12:52.882 "trtype": "TCP", 00:12:52.882 "adrfam": "IPv4", 00:12:52.882 "traddr": "10.0.0.1", 00:12:52.882 "trsvcid": "47022" 00:12:52.882 }, 00:12:52.882 "auth": { 00:12:52.882 "state": "completed", 00:12:52.882 "digest": "sha384", 00:12:52.882 "dhgroup": "ffdhe8192" 00:12:52.882 } 00:12:52.882 } 00:12:52.882 ]' 00:12:52.882 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:52.882 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:52.882 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.140 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:53.140 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.140 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.140 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.140 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.399 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:12:53.399 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:12:53.967 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.967 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:53.967 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.967 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.967 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.967 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.967 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:53.967 16:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:54.226 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:54.226 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.226 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:54.226 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:54.226 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:54.226 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.226 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:12:54.226 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.226 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.226 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.226 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:54.226 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.226 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.795 00:12:54.795 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.795 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.795 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.054 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.054 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.054 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.054 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.054 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.054 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.054 { 00:12:55.054 "cntlid": 95, 00:12:55.054 "qid": 0, 00:12:55.054 "state": "enabled", 00:12:55.054 
"thread": "nvmf_tgt_poll_group_000", 00:12:55.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:55.054 "listen_address": { 00:12:55.054 "trtype": "TCP", 00:12:55.054 "adrfam": "IPv4", 00:12:55.054 "traddr": "10.0.0.3", 00:12:55.054 "trsvcid": "4420" 00:12:55.054 }, 00:12:55.054 "peer_address": { 00:12:55.054 "trtype": "TCP", 00:12:55.054 "adrfam": "IPv4", 00:12:55.054 "traddr": "10.0.0.1", 00:12:55.054 "trsvcid": "47038" 00:12:55.054 }, 00:12:55.054 "auth": { 00:12:55.054 "state": "completed", 00:12:55.054 "digest": "sha384", 00:12:55.054 "dhgroup": "ffdhe8192" 00:12:55.054 } 00:12:55.054 } 00:12:55.054 ]' 00:12:55.054 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.054 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:55.054 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.314 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:55.314 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.314 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.314 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.314 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.573 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:12:55.573 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:12:56.138 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.138 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:56.138 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.138 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.138 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.138 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:56.138 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:56.138 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.138 16:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:56.138 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:56.395 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:56.395 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.395 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:56.395 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:56.395 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:56.395 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.395 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.395 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.395 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.395 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.395 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.395 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.395 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.653 00:12:56.911 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.911 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.911 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.169 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.169 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.169 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.169 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.169 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.169 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.169 { 00:12:57.169 "cntlid": 97, 00:12:57.169 "qid": 0, 00:12:57.169 "state": "enabled", 00:12:57.169 "thread": "nvmf_tgt_poll_group_000", 00:12:57.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:57.169 "listen_address": { 00:12:57.169 "trtype": "TCP", 00:12:57.169 "adrfam": "IPv4", 00:12:57.169 "traddr": "10.0.0.3", 00:12:57.169 "trsvcid": "4420" 00:12:57.169 }, 00:12:57.169 "peer_address": { 00:12:57.169 "trtype": "TCP", 00:12:57.169 "adrfam": "IPv4", 00:12:57.169 "traddr": "10.0.0.1", 00:12:57.169 "trsvcid": "47082" 00:12:57.169 }, 00:12:57.169 "auth": { 00:12:57.169 "state": "completed", 00:12:57.169 "digest": "sha512", 00:12:57.169 "dhgroup": "null" 00:12:57.169 } 00:12:57.169 } 00:12:57.169 ]' 00:12:57.169 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.169 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.169 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.169 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:57.169 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.169 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.169 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.169 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.427 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:57.428 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:12:58.388 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.388 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:12:58.388 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.388 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.388 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:58.388 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.388 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:58.388 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:58.685 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:58.685 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.685 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:58.685 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:58.685 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:58.685 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.685 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.685 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.685 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.685 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.685 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.685 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.685 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.944 00:12:58.944 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.944 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.944 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.203 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.203 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.203 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.203 16:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.203 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.203 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.203 { 00:12:59.203 "cntlid": 99, 00:12:59.203 "qid": 0, 00:12:59.203 "state": "enabled", 00:12:59.203 "thread": "nvmf_tgt_poll_group_000", 00:12:59.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:12:59.203 "listen_address": { 00:12:59.203 "trtype": "TCP", 00:12:59.203 "adrfam": "IPv4", 00:12:59.203 "traddr": "10.0.0.3", 00:12:59.203 "trsvcid": "4420" 00:12:59.203 }, 00:12:59.203 "peer_address": { 00:12:59.203 "trtype": "TCP", 00:12:59.203 "adrfam": "IPv4", 00:12:59.203 "traddr": "10.0.0.1", 00:12:59.203 "trsvcid": "47114" 00:12:59.203 }, 00:12:59.203 "auth": { 00:12:59.203 "state": "completed", 00:12:59.203 "digest": "sha512", 00:12:59.203 "dhgroup": "null" 00:12:59.203 } 00:12:59.203 } 00:12:59.203 ]' 00:12:59.203 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.203 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.203 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.203 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:59.203 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.462 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.462 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.462 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.720 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:12:59.720 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:13:00.287 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.287 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:00.287 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.287 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.287 16:48:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.287 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.287 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:00.287 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:00.546 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:13:00.546 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.546 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:00.546 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:00.546 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:00.546 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.546 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.546 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.546 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.546 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.546 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.546 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.546 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.805 00:13:00.805 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.805 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.805 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.063 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.063 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.063 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.063 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.063 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.063 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.063 { 00:13:01.063 "cntlid": 101, 00:13:01.063 "qid": 0, 00:13:01.063 "state": "enabled", 00:13:01.063 "thread": "nvmf_tgt_poll_group_000", 00:13:01.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:01.063 "listen_address": { 00:13:01.063 "trtype": "TCP", 00:13:01.063 "adrfam": "IPv4", 00:13:01.063 "traddr": "10.0.0.3", 00:13:01.063 "trsvcid": "4420" 00:13:01.063 }, 00:13:01.063 "peer_address": { 00:13:01.063 "trtype": "TCP", 00:13:01.063 "adrfam": "IPv4", 00:13:01.063 "traddr": "10.0.0.1", 00:13:01.063 "trsvcid": "47146" 00:13:01.063 }, 00:13:01.063 "auth": { 00:13:01.063 "state": "completed", 00:13:01.063 "digest": "sha512", 00:13:01.063 "dhgroup": "null" 00:13:01.063 } 00:13:01.063 } 00:13:01.063 ]' 00:13:01.063 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.063 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.063 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.322 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:01.322 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.322 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.322 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.322 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.579 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:13:01.579 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:13:02.146 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.146 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:02.146 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.146 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:13:02.146 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.146 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.146 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:02.146 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:02.405 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:13:02.405 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.405 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:02.405 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:02.405 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:02.405 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.405 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:13:02.405 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.405 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.405 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.405 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:02.405 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.405 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.972 00:13:02.972 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.972 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.972 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.972 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.972 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.972 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:02.972 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.972 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.972 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.972 { 00:13:02.972 "cntlid": 103, 00:13:02.972 "qid": 0, 00:13:02.972 "state": "enabled", 00:13:02.972 "thread": "nvmf_tgt_poll_group_000", 00:13:02.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:02.972 "listen_address": { 00:13:02.972 "trtype": "TCP", 00:13:02.972 "adrfam": "IPv4", 00:13:02.972 "traddr": "10.0.0.3", 00:13:02.972 "trsvcid": "4420" 00:13:02.972 }, 00:13:02.972 "peer_address": { 00:13:02.972 "trtype": "TCP", 00:13:02.972 "adrfam": "IPv4", 00:13:02.972 "traddr": "10.0.0.1", 00:13:02.972 "trsvcid": "43702" 00:13:02.972 }, 00:13:02.972 "auth": { 00:13:02.972 "state": "completed", 00:13:02.972 "digest": "sha512", 00:13:02.972 "dhgroup": "null" 00:13:02.972 } 00:13:02.972 } 00:13:02.972 ]' 00:13:02.972 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.230 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.230 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.230 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:03.230 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.230 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.230 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.230 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.488 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:03.488 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:04.056 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.056 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:04.056 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.056 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.056 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:04.056 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:04.056 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:04.056 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:04.056 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:04.315 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:04.315 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.315 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:04.315 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:04.315 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:04.315 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.315 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.315 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.315 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.315 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.315 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.315 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.315 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.882 00:13:04.882 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.882 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.882 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.142 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.143 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.143 
16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.143 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.143 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.143 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:05.143 { 00:13:05.143 "cntlid": 105, 00:13:05.143 "qid": 0, 00:13:05.143 "state": "enabled", 00:13:05.143 "thread": "nvmf_tgt_poll_group_000", 00:13:05.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:05.143 "listen_address": { 00:13:05.143 "trtype": "TCP", 00:13:05.143 "adrfam": "IPv4", 00:13:05.143 "traddr": "10.0.0.3", 00:13:05.143 "trsvcid": "4420" 00:13:05.143 }, 00:13:05.143 "peer_address": { 00:13:05.143 "trtype": "TCP", 00:13:05.143 "adrfam": "IPv4", 00:13:05.143 "traddr": "10.0.0.1", 00:13:05.143 "trsvcid": "43736" 00:13:05.143 }, 00:13:05.143 "auth": { 00:13:05.143 "state": "completed", 00:13:05.143 "digest": "sha512", 00:13:05.143 "dhgroup": "ffdhe2048" 00:13:05.143 } 00:13:05.143 } 00:13:05.143 ]' 00:13:05.143 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:05.143 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:05.143 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:05.143 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:05.143 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.143 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.143 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.143 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.402 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:13:05.402 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:13:06.338 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.338 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:06.338 16:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.338 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.338 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.338 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.338 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:06.338 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:06.338 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:06.338 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.338 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:06.338 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:06.338 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:06.338 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.338 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.338 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.338 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.338 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.338 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.338 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.339 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.904 00:13:06.904 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.904 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.904 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.163 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:13:07.163 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.163 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.163 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.163 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.163 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.163 { 00:13:07.163 "cntlid": 107, 00:13:07.163 "qid": 0, 00:13:07.163 "state": "enabled", 00:13:07.163 "thread": "nvmf_tgt_poll_group_000", 00:13:07.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:07.163 "listen_address": { 00:13:07.163 "trtype": "TCP", 00:13:07.163 "adrfam": "IPv4", 00:13:07.163 "traddr": "10.0.0.3", 00:13:07.163 "trsvcid": "4420" 00:13:07.163 }, 00:13:07.163 "peer_address": { 00:13:07.163 "trtype": "TCP", 00:13:07.163 "adrfam": "IPv4", 00:13:07.163 "traddr": "10.0.0.1", 00:13:07.163 "trsvcid": "43760" 00:13:07.163 }, 00:13:07.163 "auth": { 00:13:07.163 "state": "completed", 00:13:07.163 "digest": "sha512", 00:13:07.163 "dhgroup": "ffdhe2048" 00:13:07.163 } 00:13:07.163 } 00:13:07.163 ]' 00:13:07.163 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.163 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:07.163 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.163 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:07.163 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.163 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.163 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.163 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.422 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:13:07.422 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:13:08.358 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.358 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:08.358 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.358 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.358 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.358 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.358 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:08.358 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:08.358 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:08.358 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.358 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:08.358 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:08.358 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:08.358 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.358 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.358 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.358 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.617 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.617 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.617 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.617 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.876 00:13:08.876 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.876 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.876 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.135 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.135 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.135 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.135 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.135 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.135 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.135 { 00:13:09.135 "cntlid": 109, 00:13:09.135 "qid": 0, 00:13:09.135 "state": "enabled", 00:13:09.135 "thread": "nvmf_tgt_poll_group_000", 00:13:09.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:09.135 "listen_address": { 00:13:09.135 "trtype": "TCP", 00:13:09.135 "adrfam": "IPv4", 00:13:09.135 "traddr": "10.0.0.3", 00:13:09.135 "trsvcid": "4420" 00:13:09.135 }, 00:13:09.135 "peer_address": { 00:13:09.135 "trtype": "TCP", 00:13:09.135 "adrfam": "IPv4", 00:13:09.135 "traddr": "10.0.0.1", 00:13:09.135 "trsvcid": "43790" 00:13:09.135 }, 00:13:09.135 "auth": { 00:13:09.135 "state": "completed", 00:13:09.135 "digest": "sha512", 00:13:09.135 "dhgroup": "ffdhe2048" 00:13:09.135 } 00:13:09.135 } 00:13:09.135 ]' 00:13:09.135 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.135 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:09.135 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.394 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:09.394 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.394 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.394 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.394 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.653 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:13:09.653 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:13:10.221 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:13:10.221 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:10.221 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.221 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.221 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.221 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.221 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:10.221 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:10.481 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:10.481 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.481 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:10.481 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:10.481 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:10.481 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.481 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:13:10.481 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.481 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.481 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.481 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:10.481 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:10.481 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:10.740 00:13:10.999 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.999 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.999 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.289 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.289 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.289 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.289 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.289 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.289 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.289 { 00:13:11.289 "cntlid": 111, 00:13:11.289 "qid": 0, 00:13:11.289 "state": "enabled", 00:13:11.289 "thread": "nvmf_tgt_poll_group_000", 00:13:11.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:11.289 "listen_address": { 00:13:11.289 "trtype": "TCP", 00:13:11.289 "adrfam": "IPv4", 00:13:11.289 "traddr": "10.0.0.3", 00:13:11.289 "trsvcid": "4420" 00:13:11.289 }, 00:13:11.289 "peer_address": { 00:13:11.289 "trtype": "TCP", 00:13:11.289 "adrfam": "IPv4", 00:13:11.289 "traddr": "10.0.0.1", 00:13:11.289 "trsvcid": "43814" 00:13:11.289 }, 00:13:11.289 "auth": { 00:13:11.289 "state": "completed", 00:13:11.289 "digest": "sha512", 00:13:11.289 "dhgroup": "ffdhe2048" 00:13:11.289 } 00:13:11.289 } 00:13:11.289 ]' 00:13:11.289 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.289 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.289 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.289 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:11.289 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.290 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.290 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.290 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.548 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:11.548 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:12.117 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.117 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:12.117 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.117 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.117 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.117 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:12.117 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.117 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:12.117 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:12.685 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:12.685 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.685 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:12.685 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:12.685 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:12.685 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.685 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.685 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.685 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.685 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.685 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.685 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.685 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.944 00:13:12.944 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.944 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.944 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.203 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.203 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.203 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.203 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.203 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.203 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.203 { 00:13:13.203 "cntlid": 113, 00:13:13.203 "qid": 0, 00:13:13.203 "state": "enabled", 00:13:13.203 "thread": "nvmf_tgt_poll_group_000", 00:13:13.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:13.203 "listen_address": { 00:13:13.203 "trtype": "TCP", 00:13:13.203 "adrfam": "IPv4", 00:13:13.203 "traddr": "10.0.0.3", 00:13:13.203 "trsvcid": "4420" 00:13:13.203 }, 00:13:13.203 "peer_address": { 00:13:13.203 "trtype": "TCP", 00:13:13.203 "adrfam": "IPv4", 00:13:13.203 "traddr": "10.0.0.1", 00:13:13.203 "trsvcid": "39084" 00:13:13.203 }, 00:13:13.203 "auth": { 00:13:13.203 "state": "completed", 00:13:13.203 "digest": "sha512", 00:13:13.203 "dhgroup": "ffdhe3072" 00:13:13.203 } 00:13:13.203 } 00:13:13.203 ]' 00:13:13.203 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.203 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.203 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.463 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:13.463 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.463 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.463 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.463 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.721 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:13:13.721 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret 
DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:13:14.290 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.290 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:14.290 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.290 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.290 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.290 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.290 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:14.290 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:14.549 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:14.549 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.549 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:14.549 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:14.549 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:14.549 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.549 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.549 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.549 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.549 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.549 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.550 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.550 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.117 00:13:15.117 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.117 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.117 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.376 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.376 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.376 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.376 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.376 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.376 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.376 { 00:13:15.376 "cntlid": 115, 00:13:15.376 "qid": 0, 00:13:15.376 "state": "enabled", 00:13:15.376 "thread": "nvmf_tgt_poll_group_000", 00:13:15.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:15.376 "listen_address": { 00:13:15.376 "trtype": "TCP", 00:13:15.376 "adrfam": "IPv4", 00:13:15.376 "traddr": "10.0.0.3", 00:13:15.376 "trsvcid": "4420" 00:13:15.376 }, 00:13:15.376 "peer_address": { 00:13:15.376 "trtype": "TCP", 00:13:15.376 "adrfam": "IPv4", 00:13:15.376 "traddr": "10.0.0.1", 00:13:15.376 "trsvcid": "39118" 00:13:15.376 }, 00:13:15.376 "auth": { 00:13:15.376 "state": "completed", 00:13:15.376 "digest": "sha512", 00:13:15.376 "dhgroup": "ffdhe3072" 00:13:15.376 } 00:13:15.376 } 00:13:15.376 ]' 00:13:15.376 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.376 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:15.376 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.376 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:15.376 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.376 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.376 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.376 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.635 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:13:15.635 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid 
ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:13:16.572 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.572 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:16.572 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.572 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.572 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.572 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.572 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:16.572 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:16.831 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:16.831 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.831 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:16.831 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:16.831 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:16.831 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.831 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.831 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.831 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.831 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.831 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.831 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.831 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.089 00:13:17.089 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.089 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.089 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.347 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.347 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.347 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.347 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.347 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.347 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.347 { 00:13:17.347 "cntlid": 117, 00:13:17.347 "qid": 0, 00:13:17.347 "state": "enabled", 00:13:17.347 "thread": "nvmf_tgt_poll_group_000", 00:13:17.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:17.347 "listen_address": { 00:13:17.347 "trtype": "TCP", 00:13:17.347 "adrfam": "IPv4", 00:13:17.347 "traddr": "10.0.0.3", 00:13:17.347 "trsvcid": "4420" 00:13:17.347 }, 00:13:17.347 "peer_address": { 00:13:17.347 "trtype": "TCP", 00:13:17.347 "adrfam": "IPv4", 00:13:17.347 "traddr": "10.0.0.1", 00:13:17.347 "trsvcid": "39132" 00:13:17.347 }, 00:13:17.347 "auth": { 00:13:17.347 "state": "completed", 00:13:17.347 "digest": "sha512", 00:13:17.347 "dhgroup": "ffdhe3072" 00:13:17.347 } 00:13:17.347 } 00:13:17.347 ]' 00:13:17.347 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.604 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.604 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.604 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:17.604 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.604 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.604 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.604 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.863 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:13:17.863 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:13:18.431 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.431 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:18.431 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.431 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.431 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.431 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.431 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:18.431 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:18.690 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:18.690 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.690 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:18.690 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:18.690 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:18.690 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.690 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:13:18.690 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.690 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.690 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.690 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:18.690 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.690 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:19.258 00:13:19.258 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.258 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.258 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.518 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.518 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.518 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.518 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.518 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.518 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.518 { 00:13:19.518 "cntlid": 119, 00:13:19.518 "qid": 0, 00:13:19.518 "state": "enabled", 00:13:19.518 "thread": "nvmf_tgt_poll_group_000", 00:13:19.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:19.518 "listen_address": { 00:13:19.518 "trtype": "TCP", 00:13:19.518 "adrfam": "IPv4", 00:13:19.518 "traddr": "10.0.0.3", 00:13:19.518 "trsvcid": "4420" 00:13:19.518 }, 00:13:19.518 "peer_address": { 00:13:19.518 "trtype": "TCP", 00:13:19.518 "adrfam": "IPv4", 00:13:19.518 "traddr": "10.0.0.1", 00:13:19.518 "trsvcid": "39172" 00:13:19.518 }, 00:13:19.518 "auth": { 00:13:19.518 "state": "completed", 00:13:19.518 "digest": "sha512", 00:13:19.518 "dhgroup": "ffdhe3072" 00:13:19.518 } 00:13:19.518 } 00:13:19.518 ]' 00:13:19.518 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.518 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.518 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.518 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:19.518 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.518 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.518 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.518 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.776 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:19.777 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:20.345 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.345 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:20.345 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.345 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.345 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.345 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:20.345 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.345 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:20.345 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:20.604 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:20.604 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.604 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:20.604 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:20.604 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:20.604 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.604 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.604 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.604 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.604 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.604 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.604 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.604 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.170 00:13:21.170 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.170 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.170 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.429 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.429 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.429 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.429 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.429 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.429 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.429 { 00:13:21.429 "cntlid": 121, 00:13:21.429 "qid": 0, 00:13:21.429 "state": "enabled", 00:13:21.429 "thread": "nvmf_tgt_poll_group_000", 00:13:21.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:21.429 "listen_address": { 00:13:21.429 "trtype": "TCP", 00:13:21.429 "adrfam": "IPv4", 00:13:21.429 "traddr": "10.0.0.3", 00:13:21.429 "trsvcid": "4420" 00:13:21.429 }, 00:13:21.429 "peer_address": { 00:13:21.429 "trtype": "TCP", 00:13:21.429 "adrfam": "IPv4", 00:13:21.429 "traddr": "10.0.0.1", 00:13:21.429 "trsvcid": "39194" 00:13:21.429 }, 00:13:21.429 "auth": { 00:13:21.429 "state": "completed", 00:13:21.429 "digest": "sha512", 00:13:21.429 "dhgroup": "ffdhe4096" 00:13:21.429 } 00:13:21.429 } 00:13:21.429 ]' 00:13:21.429 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.429 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.429 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.689 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:21.689 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.689 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.689 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.689 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.948 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret 
DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:13:21.948 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.884 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.452 00:13:23.452 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.452 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.452 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.711 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.711 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.711 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.711 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.711 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.711 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.711 { 00:13:23.711 "cntlid": 123, 00:13:23.711 "qid": 0, 00:13:23.711 "state": "enabled", 00:13:23.711 "thread": "nvmf_tgt_poll_group_000", 00:13:23.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:23.711 "listen_address": { 00:13:23.711 "trtype": "TCP", 00:13:23.711 "adrfam": "IPv4", 00:13:23.711 "traddr": "10.0.0.3", 00:13:23.711 "trsvcid": "4420" 00:13:23.711 }, 00:13:23.711 "peer_address": { 00:13:23.711 "trtype": "TCP", 00:13:23.711 "adrfam": "IPv4", 00:13:23.711 "traddr": "10.0.0.1", 00:13:23.711 "trsvcid": "59192" 00:13:23.711 }, 00:13:23.711 "auth": { 00:13:23.711 "state": "completed", 00:13:23.711 "digest": "sha512", 00:13:23.711 "dhgroup": "ffdhe4096" 00:13:23.711 } 00:13:23.711 } 00:13:23.711 ]' 00:13:23.711 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.711 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.711 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.711 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:23.711 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.971 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.971 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.971 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.240 16:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:13:24.240 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:13:24.822 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.822 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:24.822 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.822 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.822 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.822 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.822 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:24.822 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:25.081 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:25.081 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.081 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:25.081 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:25.081 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:25.081 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.081 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.081 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.081 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.081 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.081 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.081 16:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.081 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.650 00:13:25.650 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.650 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.650 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.910 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.910 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.910 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.910 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.910 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.910 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.910 { 00:13:25.910 "cntlid": 125, 00:13:25.910 "qid": 0, 00:13:25.910 "state": "enabled", 00:13:25.910 "thread": "nvmf_tgt_poll_group_000", 00:13:25.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:25.910 "listen_address": { 00:13:25.910 "trtype": "TCP", 00:13:25.910 "adrfam": "IPv4", 00:13:25.910 "traddr": "10.0.0.3", 00:13:25.910 "trsvcid": "4420" 00:13:25.910 }, 00:13:25.910 "peer_address": { 00:13:25.910 "trtype": "TCP", 00:13:25.910 "adrfam": "IPv4", 00:13:25.910 "traddr": "10.0.0.1", 00:13:25.910 "trsvcid": "59214" 00:13:25.910 }, 00:13:25.910 "auth": { 00:13:25.910 "state": "completed", 00:13:25.910 "digest": "sha512", 00:13:25.910 "dhgroup": "ffdhe4096" 00:13:25.910 } 00:13:25.910 } 00:13:25.910 ]' 00:13:25.910 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.910 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.910 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.910 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:25.910 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.910 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.910 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.910 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.169 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:13:26.169 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:13:27.107 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.107 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:27.107 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.107 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.107 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.107 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.107 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:27.107 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:27.367 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:27.367 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.367 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:27.367 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:27.367 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:27.367 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.367 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:13:27.367 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.367 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.367 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.367 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:27.367 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.367 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.626 00:13:27.626 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.626 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.626 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.885 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.885 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.885 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.885 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.885 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.885 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.885 { 00:13:27.885 "cntlid": 127, 00:13:27.885 "qid": 0, 00:13:27.885 "state": "enabled", 00:13:27.885 "thread": "nvmf_tgt_poll_group_000", 00:13:27.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:27.885 "listen_address": { 00:13:27.885 "trtype": "TCP", 00:13:27.885 "adrfam": "IPv4", 00:13:27.885 "traddr": "10.0.0.3", 00:13:27.885 "trsvcid": "4420" 00:13:27.885 }, 00:13:27.885 "peer_address": { 00:13:27.885 "trtype": "TCP", 00:13:27.885 "adrfam": "IPv4", 00:13:27.885 "traddr": "10.0.0.1", 00:13:27.885 "trsvcid": "59242" 00:13:27.885 }, 00:13:27.885 "auth": { 00:13:27.885 "state": "completed", 00:13:27.885 "digest": "sha512", 00:13:27.885 "dhgroup": "ffdhe4096" 00:13:27.885 } 00:13:27.885 } 00:13:27.885 ]' 00:13:27.885 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.885 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:27.885 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.145 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:28.145 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.145 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.145 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.145 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:28.404 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:28.972 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.972 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:28.972 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.972 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.972 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.972 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:28.972 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.972 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:28.972 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:29.231 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:29.231 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.231 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:29.231 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:29.231 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:29.231 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.231 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.231 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.231 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.232 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.232 16:48:52 
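The pass that starts here repeats the same cycle as the ffdhe4096 run above, now with ffdhe6144: reconfigure the host's DH-HMAC-CHAP options, register the host NQN on the subsystem with the key index under test, then attach a controller through the host RPC socket. A minimal bash sketch of that sequence, assembled from the commands visible in this trace (the rpc.py path, sockets, addresses and NQNs are the ones the log already shows; target/auth.sh wraps these calls in helpers, so treat this as an illustrative reconstruction rather than the script's literal source):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b

  # Host side: restrict the initiator to the digest/DH group under test.
  "$rpc" -s "$hostsock" bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # Target side (default RPC socket): allow this host with key0, plus ckey0
  # for bidirectional authentication.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach a controller; the attach succeeds only if DH-HMAC-CHAP
  # completes with the configured digest and DH group.
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0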
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.232 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.232 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.799 00:13:29.799 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.799 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.799 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.058 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.058 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.058 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.058 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.058 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.058 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.058 { 00:13:30.058 "cntlid": 129, 00:13:30.058 "qid": 0, 00:13:30.058 "state": "enabled", 00:13:30.058 "thread": "nvmf_tgt_poll_group_000", 00:13:30.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:30.058 "listen_address": { 00:13:30.058 "trtype": "TCP", 00:13:30.058 "adrfam": "IPv4", 00:13:30.058 "traddr": "10.0.0.3", 00:13:30.058 "trsvcid": "4420" 00:13:30.058 }, 00:13:30.058 "peer_address": { 00:13:30.058 "trtype": "TCP", 00:13:30.058 "adrfam": "IPv4", 00:13:30.058 "traddr": "10.0.0.1", 00:13:30.058 "trsvcid": "59270" 00:13:30.058 }, 00:13:30.058 "auth": { 00:13:30.058 "state": "completed", 00:13:30.058 "digest": "sha512", 00:13:30.058 "dhgroup": "ffdhe6144" 00:13:30.058 } 00:13:30.058 } 00:13:30.058 ]' 00:13:30.058 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.058 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:30.058 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.059 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:30.059 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.317 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.317 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.317 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.577 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:13:30.577 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:13:31.145 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.145 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:31.145 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.145 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.145 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.145 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.145 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:31.145 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:31.405 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:31.405 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.405 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:31.405 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:31.405 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:31.405 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.405 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.405 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.405 16:48:55 
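The nvme_connect/nvme disconnect pair above exercises the same key from the kernel initiator. A sketch of that step, with the DHHC-1 secrets left as placeholders (substitute the full DHHC-1:00:... and DHHC-1:03:... strings exactly as printed in the trace; they are pre-generated DH-HMAC-CHAP secrets, not derived here):

  # Kernel initiator side: authenticate with the host secret and, for
  # bidirectional auth, the controller secret, then tear the session down.
  host_secret='DHHC-1:00:<host secret from the trace>'
  ctrl_secret='DHHC-1:03:<controller secret from the trace>'

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b \
      --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b \
      --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"

  nvme disconnect -n nqn.2024-03.io.spdk:cnode0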
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.405 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.405 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.405 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.405 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.970 00:13:31.970 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.970 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.970 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.970 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.970 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.970 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.970 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.970 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.229 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.229 { 00:13:32.229 "cntlid": 131, 00:13:32.229 "qid": 0, 00:13:32.229 "state": "enabled", 00:13:32.229 "thread": "nvmf_tgt_poll_group_000", 00:13:32.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:32.229 "listen_address": { 00:13:32.229 "trtype": "TCP", 00:13:32.229 "adrfam": "IPv4", 00:13:32.229 "traddr": "10.0.0.3", 00:13:32.229 "trsvcid": "4420" 00:13:32.229 }, 00:13:32.229 "peer_address": { 00:13:32.229 "trtype": "TCP", 00:13:32.229 "adrfam": "IPv4", 00:13:32.229 "traddr": "10.0.0.1", 00:13:32.229 "trsvcid": "59294" 00:13:32.229 }, 00:13:32.229 "auth": { 00:13:32.229 "state": "completed", 00:13:32.229 "digest": "sha512", 00:13:32.229 "dhgroup": "ffdhe6144" 00:13:32.229 } 00:13:32.229 } 00:13:32.229 ]' 00:13:32.229 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.229 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:32.229 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.229 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:32.229 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:32.229 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.229 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.229 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.487 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:13:32.487 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:13:33.423 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.423 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:33.423 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.423 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.423 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.423 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.423 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:33.423 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:33.681 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:33.681 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.681 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:33.681 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:33.681 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:33.681 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.681 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.681 16:48:57 
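Each attach is then verified by reading back the controller name and the subsystem's queue pairs, checking the negotiated auth parameters in the JSON blocks printed above, and detaching before the next key/DH-group combination. Roughly, and again as a reconstruction of the checks rather than the script itself:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Confirm the controller attached under the expected name.
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Ask the target which digest/DH group the qpair actually negotiated.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Tear down host and target state before the next iteration.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b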
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.681 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.681 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.681 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.681 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.681 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.940 00:13:33.940 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.940 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.940 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.198 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.198 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.199 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.199 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.199 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.199 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.199 { 00:13:34.199 "cntlid": 133, 00:13:34.199 "qid": 0, 00:13:34.199 "state": "enabled", 00:13:34.199 "thread": "nvmf_tgt_poll_group_000", 00:13:34.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:34.199 "listen_address": { 00:13:34.199 "trtype": "TCP", 00:13:34.199 "adrfam": "IPv4", 00:13:34.199 "traddr": "10.0.0.3", 00:13:34.199 "trsvcid": "4420" 00:13:34.199 }, 00:13:34.199 "peer_address": { 00:13:34.199 "trtype": "TCP", 00:13:34.199 "adrfam": "IPv4", 00:13:34.199 "traddr": "10.0.0.1", 00:13:34.199 "trsvcid": "38594" 00:13:34.199 }, 00:13:34.199 "auth": { 00:13:34.199 "state": "completed", 00:13:34.199 "digest": "sha512", 00:13:34.199 "dhgroup": "ffdhe6144" 00:13:34.199 } 00:13:34.199 } 00:13:34.199 ]' 00:13:34.199 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.457 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:34.458 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.458 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:34.458 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.458 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.458 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.458 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.716 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:13:34.716 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:13:35.283 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.283 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:35.283 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.283 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.283 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.283 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.284 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:35.284 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:35.543 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:35.543 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.543 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:35.543 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:35.543 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:35.543 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.543 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:13:35.543 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.543 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.543 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.543 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:35.543 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.543 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.803 00:13:35.803 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.803 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.803 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.062 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.062 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.062 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.062 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.062 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.062 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:36.062 { 00:13:36.062 "cntlid": 135, 00:13:36.062 "qid": 0, 00:13:36.062 "state": "enabled", 00:13:36.062 "thread": "nvmf_tgt_poll_group_000", 00:13:36.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:36.062 "listen_address": { 00:13:36.062 "trtype": "TCP", 00:13:36.062 "adrfam": "IPv4", 00:13:36.062 "traddr": "10.0.0.3", 00:13:36.062 "trsvcid": "4420" 00:13:36.062 }, 00:13:36.062 "peer_address": { 00:13:36.062 "trtype": "TCP", 00:13:36.062 "adrfam": "IPv4", 00:13:36.062 "traddr": "10.0.0.1", 00:13:36.062 "trsvcid": "38624" 00:13:36.062 }, 00:13:36.062 "auth": { 00:13:36.062 "state": "completed", 00:13:36.062 "digest": "sha512", 00:13:36.062 "dhgroup": "ffdhe6144" 00:13:36.062 } 00:13:36.062 } 00:13:36.062 ]' 00:13:36.062 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.322 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:36.322 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.322 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:36.322 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.322 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.322 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.322 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.581 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:36.581 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:37.149 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.149 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:37.149 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.149 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.149 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.149 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:37.149 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:37.149 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:37.149 16:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:37.723 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:37.723 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.723 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:37.723 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:37.723 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:37.723 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.723 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.723 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.723 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.723 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.723 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.723 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.723 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.290 00:13:38.290 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:38.290 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.290 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.549 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.549 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.549 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.549 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.549 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.549 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.549 { 00:13:38.549 "cntlid": 137, 00:13:38.549 "qid": 0, 00:13:38.549 "state": "enabled", 00:13:38.549 "thread": "nvmf_tgt_poll_group_000", 00:13:38.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:38.549 "listen_address": { 00:13:38.549 "trtype": "TCP", 00:13:38.549 "adrfam": "IPv4", 00:13:38.549 "traddr": "10.0.0.3", 00:13:38.549 "trsvcid": "4420" 00:13:38.549 }, 00:13:38.549 "peer_address": { 00:13:38.549 "trtype": "TCP", 00:13:38.549 "adrfam": "IPv4", 00:13:38.549 "traddr": "10.0.0.1", 00:13:38.549 "trsvcid": "38640" 00:13:38.549 }, 00:13:38.549 "auth": { 00:13:38.549 "state": "completed", 00:13:38.549 "digest": "sha512", 00:13:38.549 "dhgroup": "ffdhe8192" 00:13:38.549 } 00:13:38.549 } 00:13:38.549 ]' 00:13:38.549 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.549 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:38.549 16:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.549 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:38.549 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.549 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.549 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.549 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.808 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:13:38.808 16:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:13:39.745 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.745 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:39.745 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.745 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.745 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.745 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.745 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:39.745 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:40.004 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:40.004 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.005 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:40.005 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:40.005 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:40.005 16:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.005 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.005 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.005 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.005 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.005 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.005 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.005 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.573 00:13:40.573 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:40.573 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.573 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.831 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.831 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.832 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.832 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.832 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.832 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.832 { 00:13:40.832 "cntlid": 139, 00:13:40.832 "qid": 0, 00:13:40.832 "state": "enabled", 00:13:40.832 "thread": "nvmf_tgt_poll_group_000", 00:13:40.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:40.832 "listen_address": { 00:13:40.832 "trtype": "TCP", 00:13:40.832 "adrfam": "IPv4", 00:13:40.832 "traddr": "10.0.0.3", 00:13:40.832 "trsvcid": "4420" 00:13:40.832 }, 00:13:40.832 "peer_address": { 00:13:40.832 "trtype": "TCP", 00:13:40.832 "adrfam": "IPv4", 00:13:40.832 "traddr": "10.0.0.1", 00:13:40.832 "trsvcid": "38680" 00:13:40.832 }, 00:13:40.832 "auth": { 00:13:40.832 "state": "completed", 00:13:40.832 "digest": "sha512", 00:13:40.832 "dhgroup": "ffdhe8192" 00:13:40.832 } 00:13:40.832 } 00:13:40.832 ]' 00:13:40.832 16:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.091 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.091 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.091 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:41.091 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.091 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.091 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.091 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.350 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:13:41.350 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: --dhchap-ctrl-secret DHHC-1:02:NDA2ZDc4YWNkYzAyNGM1MGM3Y2RiNDdhOGZlNGU1MTIzZjdhNDYyZGQyNTkzMmFhi/RMQw==: 00:13:41.915 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.915 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:41.915 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.915 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.915 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.915 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.915 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:41.915 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:42.482 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:42.482 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.482 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:42.482 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:42.482 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:42.482 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.482 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.482 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.482 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.482 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.482 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.482 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.482 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.049 00:13:43.049 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.049 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.049 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.049 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.049 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.049 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.049 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.308 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.308 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.308 { 00:13:43.308 "cntlid": 141, 00:13:43.308 "qid": 0, 00:13:43.308 "state": "enabled", 00:13:43.308 "thread": "nvmf_tgt_poll_group_000", 00:13:43.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:43.308 "listen_address": { 00:13:43.308 "trtype": "TCP", 00:13:43.308 "adrfam": "IPv4", 00:13:43.308 "traddr": "10.0.0.3", 00:13:43.308 "trsvcid": "4420" 00:13:43.308 }, 00:13:43.308 "peer_address": { 00:13:43.308 "trtype": "TCP", 00:13:43.309 "adrfam": "IPv4", 00:13:43.309 "traddr": "10.0.0.1", 00:13:43.309 "trsvcid": "45916" 00:13:43.309 }, 00:13:43.309 "auth": { 00:13:43.309 "state": "completed", 00:13:43.309 "digest": 
"sha512", 00:13:43.309 "dhgroup": "ffdhe8192" 00:13:43.309 } 00:13:43.309 } 00:13:43.309 ]' 00:13:43.309 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.309 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:43.309 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.309 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:43.309 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.309 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.309 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.309 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.567 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:13:43.567 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:01:M2FiYTMzOTI3Y2JmYWY3NjZiZjc3YmZjZjYzYTUyZDl0/yN4: 00:13:44.503 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.503 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:44.503 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.503 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.503 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.503 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.503 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:44.503 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:44.503 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:44.503 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.503 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:44.503 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:44.503 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:44.503 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.503 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:13:44.503 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.503 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.761 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.761 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:44.761 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:44.761 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.328 00:13:45.328 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.328 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.328 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.587 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.587 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.587 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.587 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.587 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.587 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.587 { 00:13:45.587 "cntlid": 143, 00:13:45.587 "qid": 0, 00:13:45.587 "state": "enabled", 00:13:45.587 "thread": "nvmf_tgt_poll_group_000", 00:13:45.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:45.587 "listen_address": { 00:13:45.587 "trtype": "TCP", 00:13:45.587 "adrfam": "IPv4", 00:13:45.587 "traddr": "10.0.0.3", 00:13:45.587 "trsvcid": "4420" 00:13:45.587 }, 00:13:45.587 "peer_address": { 00:13:45.587 "trtype": "TCP", 00:13:45.587 "adrfam": "IPv4", 00:13:45.587 "traddr": "10.0.0.1", 00:13:45.587 "trsvcid": "45944" 00:13:45.587 }, 00:13:45.587 "auth": { 00:13:45.587 "state": "completed", 00:13:45.587 
"digest": "sha512", 00:13:45.587 "dhgroup": "ffdhe8192" 00:13:45.587 } 00:13:45.587 } 00:13:45.587 ]' 00:13:45.587 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.587 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:45.587 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.587 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:45.587 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.587 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.587 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.587 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.846 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:45.846 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:46.413 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.414 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:46.414 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.414 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.414 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.414 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:46.414 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:46.414 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:46.414 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:46.414 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:46.414 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:46.673 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:46.673 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.673 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:46.673 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:46.673 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:46.673 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.673 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.673 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.673 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.673 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.673 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.673 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.673 16:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.608 00:13:47.608 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.608 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.608 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.608 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.608 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.608 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.608 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.608 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.608 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.608 { 00:13:47.608 "cntlid": 145, 00:13:47.608 "qid": 0, 00:13:47.608 "state": "enabled", 00:13:47.608 "thread": "nvmf_tgt_poll_group_000", 00:13:47.608 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:47.608 "listen_address": { 00:13:47.608 "trtype": "TCP", 00:13:47.608 "adrfam": "IPv4", 00:13:47.608 "traddr": "10.0.0.3", 00:13:47.608 "trsvcid": "4420" 00:13:47.608 }, 00:13:47.608 "peer_address": { 00:13:47.608 "trtype": "TCP", 00:13:47.608 "adrfam": "IPv4", 00:13:47.608 "traddr": "10.0.0.1", 00:13:47.608 "trsvcid": "45980" 00:13:47.608 }, 00:13:47.608 "auth": { 00:13:47.608 "state": "completed", 00:13:47.608 "digest": "sha512", 00:13:47.608 "dhgroup": "ffdhe8192" 00:13:47.608 } 00:13:47.608 } 00:13:47.608 ]' 00:13:47.608 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.608 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:47.608 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.867 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:47.867 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.867 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.867 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.867 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.127 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:13:48.127 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:00:NThlZmZlNGJlMmQ0YTYxYmJjOTJlNDc1ODQzNWQ2MjA3MjU4NDMzYzMzMTliM2NiG4m6ig==: --dhchap-ctrl-secret DHHC-1:03:Y2U1MzM5NjE1ODE1MjVjYzdiMWUwZTdhY2U0NTBmMGJhNmJkOTU5NGVhZmI0ODJiZWJiODMyNWU5OTI4Y2RjZpZ+Uoc=: 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 00:13:48.694 16:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:48.694 16:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:49.262 request: 00:13:49.262 { 00:13:49.262 "name": "nvme0", 00:13:49.262 "trtype": "tcp", 00:13:49.262 "traddr": "10.0.0.3", 00:13:49.262 "adrfam": "ipv4", 00:13:49.262 "trsvcid": "4420", 00:13:49.262 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:49.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:49.262 "prchk_reftag": false, 00:13:49.262 "prchk_guard": false, 00:13:49.262 "hdgst": false, 00:13:49.262 "ddgst": false, 00:13:49.262 "dhchap_key": "key2", 00:13:49.262 "allow_unrecognized_csi": false, 00:13:49.262 "method": "bdev_nvme_attach_controller", 00:13:49.263 "req_id": 1 00:13:49.263 } 00:13:49.263 Got JSON-RPC error response 00:13:49.263 response: 00:13:49.263 { 00:13:49.263 "code": -5, 00:13:49.263 "message": "Input/output error" 00:13:49.263 } 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:49.263 
16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:49.263 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:50.199 request: 00:13:50.199 { 00:13:50.199 "name": "nvme0", 00:13:50.199 "trtype": "tcp", 00:13:50.199 "traddr": "10.0.0.3", 00:13:50.199 "adrfam": "ipv4", 00:13:50.199 "trsvcid": "4420", 00:13:50.199 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:50.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:50.200 "prchk_reftag": false, 00:13:50.200 "prchk_guard": false, 00:13:50.200 "hdgst": false, 00:13:50.200 "ddgst": false, 00:13:50.200 "dhchap_key": "key1", 00:13:50.200 "dhchap_ctrlr_key": "ckey2", 00:13:50.200 "allow_unrecognized_csi": false, 00:13:50.200 "method": "bdev_nvme_attach_controller", 00:13:50.200 "req_id": 1 00:13:50.200 } 00:13:50.200 Got JSON-RPC error response 00:13:50.200 response: 00:13:50.200 { 
00:13:50.200 "code": -5, 00:13:50.200 "message": "Input/output error" 00:13:50.200 } 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.200 16:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.768 
request: 00:13:50.768 { 00:13:50.768 "name": "nvme0", 00:13:50.768 "trtype": "tcp", 00:13:50.768 "traddr": "10.0.0.3", 00:13:50.768 "adrfam": "ipv4", 00:13:50.768 "trsvcid": "4420", 00:13:50.768 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:50.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:50.768 "prchk_reftag": false, 00:13:50.768 "prchk_guard": false, 00:13:50.768 "hdgst": false, 00:13:50.768 "ddgst": false, 00:13:50.768 "dhchap_key": "key1", 00:13:50.768 "dhchap_ctrlr_key": "ckey1", 00:13:50.768 "allow_unrecognized_csi": false, 00:13:50.768 "method": "bdev_nvme_attach_controller", 00:13:50.768 "req_id": 1 00:13:50.768 } 00:13:50.768 Got JSON-RPC error response 00:13:50.768 response: 00:13:50.768 { 00:13:50.768 "code": -5, 00:13:50.768 "message": "Input/output error" 00:13:50.768 } 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 81056 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81056 ']' 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81056 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81056 00:13:50.768 killing process with pid 81056 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81056' 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81056 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81056 00:13:50.768 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:51.027 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:51.027 16:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:51.027 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.027 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=84095 00:13:51.027 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:51.027 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 84095 00:13:51.028 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 84095 ']' 00:13:51.028 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.028 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.028 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.028 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.028 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.287 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.287 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:51.287 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:51.287 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:51.287 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.287 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.287 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:51.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.287 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 84095 00:13:51.287 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 84095 ']' 00:13:51.287 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.287 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.287 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
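[Editorial note, not part of the captured log] The target has just been restarted with `-L nvmf_auth`, and the next block of the trace re-registers the DHCHAP secrets through the keyring module before authorizing the host. A minimal sketch of that setup, using the same RPCs and key-file paths that appear in the trace below (only the first key pair is shown; the remaining keys follow the same pattern):

```bash
# Sketch of the keyring-backed DHCHAP setup performed by the trace below.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Register each generated secret (and its controller counterpart) as a named key.
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.1rX
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SiQ
$RPC keyring_file_add_key key1  /tmp/spdk.key-sha256.b24
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Twt

# Authorize the host on the subsystem with one of the registered keys.
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b \
    --dhchap-key key3
```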
00:13:51.287 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.287 16:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.549 null0 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1rX 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.SiQ ]] 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SiQ 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.549 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.b24 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Twt ]] 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Twt 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:51.831 16:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZMI 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Pt6 ]] 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Pt6 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.UWq 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:51.831 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:51.832 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.832 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:51.832 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:51.832 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:51.832 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.832 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:13:51.832 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.832 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.832 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.832 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:51.832 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
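[Editorial note, not part of the captured log] A condensed sketch of the connect_authenticate step the following trace runs for the sha512/ffdhe8192/key3 case: attach from the host-side RPC server, then confirm on the target that the queue pair negotiated the expected digest, DH group, and auth state. The RPCs, addresses, and expected values are the ones visible in the trace; the test framework's helper wrappers are dropped:

```bash
# Host attaches with key3; target then inspects the resulting qpair.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b

# Host side: attach a controller, authenticating with key3.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key3

# Target side: the new qpair should report the negotiated parameters
# and a completed authentication state.
qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
```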
00:13:51.832 16:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.774 nvme0n1 00:13:52.774 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.774 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.774 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.033 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.033 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.033 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.033 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.033 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.033 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.033 { 00:13:53.033 "cntlid": 1, 00:13:53.033 "qid": 0, 00:13:53.033 "state": "enabled", 00:13:53.033 "thread": "nvmf_tgt_poll_group_000", 00:13:53.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:53.033 "listen_address": { 00:13:53.033 "trtype": "TCP", 00:13:53.033 "adrfam": "IPv4", 00:13:53.033 "traddr": "10.0.0.3", 00:13:53.033 "trsvcid": "4420" 00:13:53.033 }, 00:13:53.033 "peer_address": { 00:13:53.033 "trtype": "TCP", 00:13:53.033 "adrfam": "IPv4", 00:13:53.033 "traddr": "10.0.0.1", 00:13:53.033 "trsvcid": "46036" 00:13:53.033 }, 00:13:53.033 "auth": { 00:13:53.033 "state": "completed", 00:13:53.033 "digest": "sha512", 00:13:53.033 "dhgroup": "ffdhe8192" 00:13:53.033 } 00:13:53.033 } 00:13:53.033 ]' 00:13:53.033 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.034 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:53.034 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.293 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:53.293 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.293 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.293 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.293 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.552 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:53.552 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:54.121 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.380 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:54.380 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.380 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.380 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.380 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key3 00:13:54.380 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.380 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.380 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.380 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:54.380 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:54.639 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:54.639 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:54.639 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:54.639 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:54.639 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:54.639 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:54.639 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:54.639 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:54.639 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:54.639 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:54.898 request: 00:13:54.898 { 00:13:54.898 "name": "nvme0", 00:13:54.898 "trtype": "tcp", 00:13:54.898 "traddr": "10.0.0.3", 00:13:54.898 "adrfam": "ipv4", 00:13:54.898 "trsvcid": "4420", 00:13:54.898 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:54.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:54.898 "prchk_reftag": false, 00:13:54.898 "prchk_guard": false, 00:13:54.898 "hdgst": false, 00:13:54.898 "ddgst": false, 00:13:54.898 "dhchap_key": "key3", 00:13:54.898 "allow_unrecognized_csi": false, 00:13:54.898 "method": "bdev_nvme_attach_controller", 00:13:54.898 "req_id": 1 00:13:54.898 } 00:13:54.898 Got JSON-RPC error response 00:13:54.898 response: 00:13:54.898 { 00:13:54.898 "code": -5, 00:13:54.898 "message": "Input/output error" 00:13:54.898 } 00:13:54.898 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:54.898 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:54.898 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:54.898 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:54.898 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:54.898 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:54.898 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:54.898 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:55.157 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:55.157 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:55.157 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:55.157 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:55.157 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:55.157 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:55.157 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:55.157 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:55.157 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:55.157 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:55.725 request: 00:13:55.725 { 00:13:55.725 "name": "nvme0", 00:13:55.725 "trtype": "tcp", 00:13:55.725 "traddr": "10.0.0.3", 00:13:55.725 "adrfam": "ipv4", 00:13:55.725 "trsvcid": "4420", 00:13:55.725 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:55.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:55.725 "prchk_reftag": false, 00:13:55.725 "prchk_guard": false, 00:13:55.725 "hdgst": false, 00:13:55.725 "ddgst": false, 00:13:55.725 "dhchap_key": "key3", 00:13:55.725 "allow_unrecognized_csi": false, 00:13:55.725 "method": "bdev_nvme_attach_controller", 00:13:55.725 "req_id": 1 00:13:55.725 } 00:13:55.725 Got JSON-RPC error response 00:13:55.725 response: 00:13:55.725 { 00:13:55.725 "code": -5, 00:13:55.725 "message": "Input/output error" 00:13:55.725 } 00:13:55.725 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:55.725 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:55.725 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:55.725 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:55.725 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:55.725 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:55.725 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:55.725 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:55.725 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:55.725 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:55.984 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:55.985 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:56.244 request: 00:13:56.244 { 00:13:56.244 "name": "nvme0", 00:13:56.244 "trtype": "tcp", 00:13:56.244 "traddr": "10.0.0.3", 00:13:56.244 "adrfam": "ipv4", 00:13:56.244 "trsvcid": "4420", 00:13:56.244 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:56.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:13:56.244 "prchk_reftag": false, 00:13:56.244 "prchk_guard": false, 00:13:56.244 "hdgst": false, 00:13:56.244 "ddgst": false, 00:13:56.244 "dhchap_key": "key0", 00:13:56.244 "dhchap_ctrlr_key": "key1", 00:13:56.244 "allow_unrecognized_csi": false, 00:13:56.244 "method": "bdev_nvme_attach_controller", 00:13:56.244 "req_id": 1 00:13:56.244 } 00:13:56.244 Got JSON-RPC error response 00:13:56.244 response: 00:13:56.244 { 00:13:56.244 "code": -5, 00:13:56.244 "message": "Input/output error" 00:13:56.244 } 00:13:56.244 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:56.244 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:56.244 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:56.244 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:13:56.244 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:56.244 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:56.244 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:56.813 nvme0n1 00:13:56.813 16:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:56.813 16:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:56.813 16:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.072 16:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.072 16:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.072 16:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.344 16:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 00:13:57.344 16:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.345 16:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.345 16:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.345 16:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:57.345 16:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:57.345 16:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:58.281 nvme0n1 00:13:58.281 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:58.281 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.281 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:58.540 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.540 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:58.540 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.540 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.540 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.540 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:58.540 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:58.540 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.799 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.799 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:58.799 16:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid ecede086-b106-482f-ba49-ce4e74dc3f2b -l 0 --dhchap-secret DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: --dhchap-ctrl-secret DHHC-1:03:NzdlNGY1NDU3NzEwMTYyYWYxOTVjOWJjMTRlNTk4OWE4MmQxZDBhZGY3ZjJjNDg3ZGYwOTBjYzllYWJmOTI4NSA1g84=: 00:13:59.739 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:59.739 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:59.739 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:59.739 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:59.739 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:59.739 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:59.739 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:59.739 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.739 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.998 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:59.998 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:59.998 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:59.998 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:59.998 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.998 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:59.998 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.998 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:59.998 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:59.998 16:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:00.567 request: 00:14:00.567 { 00:14:00.567 "name": "nvme0", 00:14:00.567 "trtype": "tcp", 00:14:00.567 "traddr": "10.0.0.3", 00:14:00.567 "adrfam": "ipv4", 00:14:00.567 "trsvcid": "4420", 00:14:00.567 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:00.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b", 00:14:00.567 "prchk_reftag": false, 00:14:00.567 "prchk_guard": false, 00:14:00.567 "hdgst": false, 00:14:00.567 "ddgst": false, 00:14:00.567 "dhchap_key": "key1", 00:14:00.567 "allow_unrecognized_csi": false, 00:14:00.567 "method": "bdev_nvme_attach_controller", 00:14:00.567 "req_id": 1 00:14:00.567 } 00:14:00.567 Got JSON-RPC error response 00:14:00.567 response: 00:14:00.567 { 00:14:00.567 "code": -5, 00:14:00.567 "message": "Input/output error" 00:14:00.567 } 00:14:00.567 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:00.567 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:00.567 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:00.567 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:00.567 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:00.568 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:00.568 16:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:01.507 nvme0n1 00:14:01.507 
16:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:14:01.507 16:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:14:01.507 16:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.073 16:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.073 16:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.073 16:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.332 16:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:14:02.332 16:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.332 16:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.332 16:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.332 16:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:14:02.333 16:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:02.333 16:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:02.592 nvme0n1 00:14:02.593 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:14:02.593 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.593 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:14:02.852 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.852 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.852 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.112 16:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: '' 2s 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: ]] 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZDkwNGUxYmI2NDMzOTRiMTJhODBlMzMwODFmODJlOGG2w0Wr: 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:03.112 16:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:05.709 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:05.709 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:05.709 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:05.709 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:05.709 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:05.709 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:05.709 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:05.709 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:05.709 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.709 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.709 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.709 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: 2s 00:14:05.709 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:05.710 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:05.710 16:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:05.710 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: 00:14:05.710 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:05.710 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:05.710 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:05.710 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: ]] 00:14:05.710 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YjEwNWFjZWJhZDhiNDI3ZTZiOWYxODBhMzA4YjUxYjc3MDVmNTQ5ZDE0MzQ1NGJlxjHZGg==: 00:14:05.710 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:05.710 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:07.611 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:07.611 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:07.611 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:07.611 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:07.611 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:07.611 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:07.611 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:07.611 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.611 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:07.611 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.611 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.611 16:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.611 16:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:07.611 16:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:07.611 16:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:08.546 nvme0n1 00:14:08.546 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:08.546 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.546 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.546 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.546 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:08.546 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:09.113 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:09.113 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:09.113 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.372 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.372 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:14:09.372 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.372 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.372 16:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.372 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:09.372 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:09.630 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:09.630 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.630 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:09.889 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.889 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:09.889 16:49:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.889 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.889 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.889 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:09.889 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:09.889 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:09.889 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:09.889 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.889 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:09.889 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.889 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:09.889 16:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:10.456 request: 00:14:10.456 { 00:14:10.456 "name": "nvme0", 00:14:10.456 "dhchap_key": "key1", 00:14:10.456 "dhchap_ctrlr_key": "key3", 00:14:10.456 "method": "bdev_nvme_set_keys", 00:14:10.456 "req_id": 1 00:14:10.456 } 00:14:10.456 Got JSON-RPC error response 00:14:10.456 response: 00:14:10.456 { 00:14:10.456 "code": -13, 00:14:10.456 "message": "Permission denied" 00:14:10.456 } 00:14:10.456 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:10.456 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:10.456 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:10.456 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:10.456 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:10.456 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:10.456 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.714 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:10.714 16:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:12.089 16:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:12.089 16:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:12.089 16:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.089 16:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:12.089 16:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:12.089 16:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.089 16:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.089 16:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.089 16:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:12.089 16:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:12.089 16:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:13.025 nvme0n1 00:14:13.025 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:13.025 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.025 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.025 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.025 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:13.025 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:13.026 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:13.026 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:13.284 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.284 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:13.284 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.285 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:13.285 16:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:13.853 request: 00:14:13.853 { 00:14:13.853 "name": "nvme0", 00:14:13.853 "dhchap_key": "key2", 00:14:13.853 "dhchap_ctrlr_key": "key0", 00:14:13.853 "method": "bdev_nvme_set_keys", 00:14:13.853 "req_id": 1 00:14:13.853 } 00:14:13.853 Got JSON-RPC error response 00:14:13.853 response: 00:14:13.853 { 00:14:13.853 "code": -13, 00:14:13.853 "message": "Permission denied" 00:14:13.853 } 00:14:13.853 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:13.853 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:13.853 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:13.853 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:13.853 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:13.853 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.853 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:14.111 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:14.111 16:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:15.503 16:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:15.503 16:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:15.503 16:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.503 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:15.503 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:15.503 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:15.503 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 81082 00:14:15.503 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81082 ']' 00:14:15.503 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81082 00:14:15.503 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:15.503 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.503 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81082 00:14:15.503 killing process with pid 81082 00:14:15.503 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:15.503 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:15.503 16:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81082' 00:14:15.503 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81082 00:14:15.503 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81082 00:14:15.761 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:15.761 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:15.761 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:15.761 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:15.761 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:15.762 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:15.762 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:15.762 rmmod nvme_tcp 00:14:15.762 rmmod nvme_fabrics 00:14:15.762 rmmod nvme_keyring 00:14:15.762 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:15.762 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:15.762 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:15.762 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 84095 ']' 00:14:15.762 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 84095 00:14:15.762 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 84095 ']' 00:14:15.762 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 84095 00:14:15.762 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:15.762 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.762 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84095 00:14:16.021 killing process with pid 84095 00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84095' 00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 84095 00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 84095 00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 
00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:16.021 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:16.022 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:16.022 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:16.022 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:16.022 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:16.022 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:16.022 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:16.022 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:16.022 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:16.281 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:16.281 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:16.281 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:16.281 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:16.281 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:16.281 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.281 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.281 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.281 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:16.281 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1rX /tmp/spdk.key-sha256.b24 /tmp/spdk.key-sha384.ZMI /tmp/spdk.key-sha512.UWq /tmp/spdk.key-sha512.SiQ /tmp/spdk.key-sha384.Twt /tmp/spdk.key-sha256.Pt6 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:16.281 00:14:16.281 real 3m7.015s 00:14:16.281 user 7m29.523s 00:14:16.281 sys 0m27.731s 00:14:16.281 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.281 ************************************ 00:14:16.281 END TEST nvmf_auth_target 00:14:16.281 16:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:16.281 ************************************ 00:14:16.281 16:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:16.281 16:49:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:16.281 16:49:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:16.281 16:49:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.281 16:49:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:16.281 ************************************ 00:14:16.281 START TEST nvmf_bdevio_no_huge 00:14:16.281 ************************************ 00:14:16.281 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:16.540 * Looking for test storage... 00:14:16.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:16.540 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:16.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.541 --rc genhtml_branch_coverage=1 00:14:16.541 --rc genhtml_function_coverage=1 00:14:16.541 --rc genhtml_legend=1 00:14:16.541 --rc geninfo_all_blocks=1 00:14:16.541 --rc geninfo_unexecuted_blocks=1 00:14:16.541 00:14:16.541 ' 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:16.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.541 --rc genhtml_branch_coverage=1 00:14:16.541 --rc genhtml_function_coverage=1 00:14:16.541 --rc genhtml_legend=1 00:14:16.541 --rc geninfo_all_blocks=1 00:14:16.541 --rc geninfo_unexecuted_blocks=1 00:14:16.541 00:14:16.541 ' 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:16.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.541 --rc genhtml_branch_coverage=1 00:14:16.541 --rc genhtml_function_coverage=1 00:14:16.541 --rc genhtml_legend=1 00:14:16.541 --rc geninfo_all_blocks=1 00:14:16.541 --rc geninfo_unexecuted_blocks=1 00:14:16.541 00:14:16.541 ' 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:16.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.541 --rc genhtml_branch_coverage=1 00:14:16.541 --rc genhtml_function_coverage=1 00:14:16.541 --rc genhtml_legend=1 00:14:16.541 --rc geninfo_all_blocks=1 00:14:16.541 --rc geninfo_unexecuted_blocks=1 00:14:16.541 00:14:16.541 ' 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:16.541 
16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.541 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:16.542 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:16.542 
16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:16.542 Cannot find device "nvmf_init_br" 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:16.542 Cannot find device "nvmf_init_br2" 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:16.542 Cannot find device "nvmf_tgt_br" 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:16.542 Cannot find device "nvmf_tgt_br2" 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:16.542 Cannot find device "nvmf_init_br" 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:16.542 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:16.801 Cannot find device "nvmf_init_br2" 00:14:16.801 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:16.801 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:16.801 Cannot find device "nvmf_tgt_br" 00:14:16.801 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:16.801 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:16.801 Cannot find device "nvmf_tgt_br2" 00:14:16.801 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:16.801 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:16.801 Cannot find device "nvmf_br" 00:14:16.801 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:16.801 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:16.801 Cannot find device "nvmf_init_if" 00:14:16.801 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:16.801 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:16.801 Cannot find device "nvmf_init_if2" 00:14:16.801 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:16.801 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:16.801 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:16.801 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:16.801 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:16.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:16.802 16:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:16.802 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:17.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:17.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:14:17.061 00:14:17.061 --- 10.0.0.3 ping statistics --- 00:14:17.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.061 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:17.061 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:17.061 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:14:17.061 00:14:17.061 --- 10.0.0.4 ping statistics --- 00:14:17.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.061 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:17.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:17.061 00:14:17.061 --- 10.0.0.1 ping statistics --- 00:14:17.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.061 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:17.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:17.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:14:17.061 00:14:17.061 --- 10.0.0.2 ping statistics --- 00:14:17.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.061 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=84732 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 84732 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 84732 ']' 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.061 16:49:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:17.061 [2024-11-29 16:49:40.757993] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
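Stripped of the xtrace prefixes and timestamps, the nvmf_veth_init sequence traced above reduces to the sketch below. It is a reconstruction of the harness steps, not the harness code itself: interface names, addresses and port 4420 are taken verbatim from the trace, and the ipts wrapper is approximated by a hypothetical helper that appends the SPDK_NVMF comment the later cleanup greps for.

# Two initiator-side veth pairs stay in the root namespace (10.0.0.1/.2); the two
# target-side pairs are moved into nvmf_tgt_ns_spdk (10.0.0.3/.4); the peer ends are
# enslaved to bridge nvmf_br so host and namespace can reach each other.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" master nvmf_br
done
# Hypothetical stand-in for the ipts helper seen in the trace: tag every rule so the
# teardown can later remove exactly these rules (iptables-save | grep -v SPDK_NVMF).
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Connectivity check in both directions before any NVMe/TCP traffic is attempted.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2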
00:14:17.061 [2024-11-29 16:49:40.758088] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:17.321 [2024-11-29 16:49:40.908227] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:17.321 [2024-11-29 16:49:40.944512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.321 [2024-11-29 16:49:41.016806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.321 [2024-11-29 16:49:41.016866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.321 [2024-11-29 16:49:41.016880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.321 [2024-11-29 16:49:41.016890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.321 [2024-11-29 16:49:41.016899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.321 [2024-11-29 16:49:41.018366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:17.321 [2024-11-29 16:49:41.018471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:17.321 [2024-11-29 16:49:41.022381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:17.321 [2024-11-29 16:49:41.022429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.321 [2024-11-29 16:49:41.031265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:18.258 [2024-11-29 16:49:41.881205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 
-- # set +x 00:14:18.258 Malloc0 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:18.258 [2024-11-29 16:49:41.929441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:18.258 { 00:14:18.258 "params": { 00:14:18.258 "name": "Nvme$subsystem", 00:14:18.258 "trtype": "$TEST_TRANSPORT", 00:14:18.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:18.258 "adrfam": "ipv4", 00:14:18.258 "trsvcid": "$NVMF_PORT", 00:14:18.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:18.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:18.258 "hdgst": ${hdgst:-false}, 00:14:18.258 "ddgst": ${ddgst:-false} 00:14:18.258 }, 00:14:18.258 "method": "bdev_nvme_attach_controller" 00:14:18.258 } 00:14:18.258 EOF 00:14:18.258 )") 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
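At this point the target running inside the namespace has been provisioned entirely over JSON-RPC; the rpc_cmd calls traced above correspond to the rpc.py invocations sketched below. The harness actually routes these through its rpc_cmd wrapper over /var/tmp/spdk.sock, so the direct rpc.py form is an assumption (the same script is invoked directly later in this log). bdevio then attaches from the host side of the bridge using a generated JSON config, printed just below, whose only action is a single bdev_nvme_attach_controller against 10.0.0.3:4420.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport (-u: in-capsule data size)
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# Host-side I/O test against that listener; the harness feeds the generated JSON in
# on an extra file descriptor: bdevio --json /dev/fd/62 --no-huge -s 1024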
00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:14:18.258 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:18.258 "params": { 00:14:18.258 "name": "Nvme1", 00:14:18.258 "trtype": "tcp", 00:14:18.258 "traddr": "10.0.0.3", 00:14:18.258 "adrfam": "ipv4", 00:14:18.258 "trsvcid": "4420", 00:14:18.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:18.258 "hdgst": false, 00:14:18.258 "ddgst": false 00:14:18.258 }, 00:14:18.258 "method": "bdev_nvme_attach_controller" 00:14:18.258 }' 00:14:18.258 [2024-11-29 16:49:41.989885] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:18.258 [2024-11-29 16:49:41.989978] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid84774 ] 00:14:18.517 [2024-11-29 16:49:42.133671] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:18.517 [2024-11-29 16:49:42.150810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:18.517 [2024-11-29 16:49:42.212373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.518 [2024-11-29 16:49:42.212530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.518 [2024-11-29 16:49:42.212547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.518 [2024-11-29 16:49:42.228318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.777 I/O targets: 00:14:18.777 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:18.777 00:14:18.777 00:14:18.777 CUnit - A unit testing framework for C - Version 2.1-3 00:14:18.777 http://cunit.sourceforge.net/ 00:14:18.777 00:14:18.777 00:14:18.777 Suite: bdevio tests on: Nvme1n1 00:14:18.777 Test: blockdev write read block ...passed 00:14:18.777 Test: blockdev write zeroes read block ...passed 00:14:18.777 Test: blockdev write zeroes read no split ...passed 00:14:18.777 Test: blockdev write zeroes read split ...passed 00:14:18.777 Test: blockdev write zeroes read split partial ...passed 00:14:18.777 Test: blockdev reset ...[2024-11-29 16:49:42.468424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:18.777 [2024-11-29 16:49:42.468537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1029e60 (9): Bad file descriptor 00:14:18.777 [2024-11-29 16:49:42.483253] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:18.777 passed 00:14:18.777 Test: blockdev write read 8 blocks ...passed 00:14:18.777 Test: blockdev write read size > 128k ...passed 00:14:18.777 Test: blockdev write read invalid size ...passed 00:14:18.777 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:18.777 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:18.777 Test: blockdev write read max offset ...passed 00:14:18.777 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:18.777 Test: blockdev writev readv 8 blocks ...passed 00:14:18.777 Test: blockdev writev readv 30 x 1block ...passed 00:14:18.777 Test: blockdev writev readv block ...passed 00:14:18.777 Test: blockdev writev readv size > 128k ...passed 00:14:18.777 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:18.777 Test: blockdev comparev and writev ...[2024-11-29 16:49:42.491697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.777 [2024-11-29 16:49:42.491776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:18.777 [2024-11-29 16:49:42.491803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.777 [2024-11-29 16:49:42.491816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:18.777 [2024-11-29 16:49:42.492116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.777 [2024-11-29 16:49:42.492137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:18.777 [2024-11-29 16:49:42.492158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.777 [2024-11-29 16:49:42.492170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:18.777 [2024-11-29 16:49:42.492497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.777 [2024-11-29 16:49:42.492519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:18.777 [2024-11-29 16:49:42.492540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.777 [2024-11-29 16:49:42.492553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:18.777 [2024-11-29 16:49:42.492888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.777 [2024-11-29 16:49:42.492909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:18.777 [2024-11-29 16:49:42.492929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.777 [2024-11-29 16:49:42.492941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:18.777 passed 00:14:18.777 Test: blockdev nvme passthru rw ...passed 00:14:18.777 Test: blockdev nvme passthru vendor specific ...[2024-11-29 16:49:42.494041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:18.777 [2024-11-29 16:49:42.494074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:18.777 [2024-11-29 16:49:42.494206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:18.777 [2024-11-29 16:49:42.494232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:18.777 [2024-11-29 16:49:42.494376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:18.777 [2024-11-29 16:49:42.494396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:18.777 passed 00:14:18.777 Test: blockdev nvme admin passthru ...[2024-11-29 16:49:42.494534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:18.777 [2024-11-29 16:49:42.494558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:18.777 passed 00:14:18.777 Test: blockdev copy ...passed 00:14:18.777 00:14:18.777 Run Summary: Type Total Ran Passed Failed Inactive 00:14:18.777 suites 1 1 n/a 0 0 00:14:18.777 tests 23 23 23 0 0 00:14:18.777 asserts 152 152 152 0 n/a 00:14:18.777 00:14:18.777 Elapsed time = 0.173 seconds 00:14:19.378 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:19.378 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.378 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:19.378 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.378 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:19.379 rmmod nvme_tcp 00:14:19.379 rmmod nvme_fabrics 00:14:19.379 rmmod nvme_keyring 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 84732 ']' 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 84732 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 84732 ']' 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 84732 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84732 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:19.379 killing process with pid 84732 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84732' 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 84732 00:14:19.379 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 84732 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:19.638 16:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:19.638 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:19.897 00:14:19.897 real 0m3.529s 00:14:19.897 user 0m10.593s 00:14:19.897 sys 0m1.410s 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:19.897 ************************************ 00:14:19.897 END TEST nvmf_bdevio_no_huge 00:14:19.897 ************************************ 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:19.897 ************************************ 00:14:19.897 START TEST nvmf_tls 00:14:19.897 ************************************ 00:14:19.897 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:20.157 * Looking for test storage... 
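The teardown that closes the no-huge bdevio test is symmetric with the setup; condensed from the nvmftestfini trace above into the sketch below (a reconstruction, with remove_spdk_ns assumed to delete the namespace since the trace silences that helper).

sync
modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics/nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
# killprocess: the trace first verifies the pid's comm name (ps --no-headers -o comm=)
# and refuses to kill anything running as sudo, then kills and reaps the target.
kill "$nvmfpid" && wait "$nvmfpid"
# iptr: drop only the firewall rules the test added; every one carries an SPDK_NVMF
# comment, so filtering the iptables-save dump removes exactly those rules.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# nvmf_veth_fini: unwind the bridge/veth topology, then drop the namespace.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" nomaster
  ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # remove_spdk_ns (assumed)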
00:14:20.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:20.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.157 --rc genhtml_branch_coverage=1 00:14:20.157 --rc genhtml_function_coverage=1 00:14:20.157 --rc genhtml_legend=1 00:14:20.157 --rc geninfo_all_blocks=1 00:14:20.157 --rc geninfo_unexecuted_blocks=1 00:14:20.157 00:14:20.157 ' 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:20.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.157 --rc genhtml_branch_coverage=1 00:14:20.157 --rc genhtml_function_coverage=1 00:14:20.157 --rc genhtml_legend=1 00:14:20.157 --rc geninfo_all_blocks=1 00:14:20.157 --rc geninfo_unexecuted_blocks=1 00:14:20.157 00:14:20.157 ' 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:20.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.157 --rc genhtml_branch_coverage=1 00:14:20.157 --rc genhtml_function_coverage=1 00:14:20.157 --rc genhtml_legend=1 00:14:20.157 --rc geninfo_all_blocks=1 00:14:20.157 --rc geninfo_unexecuted_blocks=1 00:14:20.157 00:14:20.157 ' 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:20.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.157 --rc genhtml_branch_coverage=1 00:14:20.157 --rc genhtml_function_coverage=1 00:14:20.157 --rc genhtml_legend=1 00:14:20.157 --rc geninfo_all_blocks=1 00:14:20.157 --rc geninfo_unexecuted_blocks=1 00:14:20.157 00:14:20.157 ' 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.157 16:49:43 
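The lt/cmp_versions helper being traced above (used here to decide which lcov options apply) is a plain field-by-field comparison after splitting both version strings on '.', '-' and ':'. Below is a simplified sketch of that logic; the name cmp_ver_lt is hypothetical, and the scripts/common.sh original additionally validates each field through its decimal helper.

cmp_ver_lt() {                       # returns 0 if $1 is strictly older than $2
  local IFS=.-: v
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
    ((d1 > d2)) && return 1
    ((d1 < d2)) && return 0
  done
  return 1                            # equal versions are not "less than"
}
cmp_ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"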
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.157 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:20.158 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:20.158 
16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:20.158 Cannot find device "nvmf_init_br" 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:20.158 Cannot find device "nvmf_init_br2" 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:20.158 Cannot find device "nvmf_tgt_br" 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:20.158 Cannot find device "nvmf_tgt_br2" 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:20.158 Cannot find device "nvmf_init_br" 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:20.158 Cannot find device "nvmf_init_br2" 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:20.158 Cannot find device "nvmf_tgt_br" 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:20.158 Cannot find device "nvmf_tgt_br2" 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:20.158 Cannot find device "nvmf_br" 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:20.158 Cannot find device "nvmf_init_if" 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:20.158 Cannot find device "nvmf_init_if2" 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:20.158 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:20.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:20.418 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:20.418 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:20.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:20.418 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:20.418 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:20.418 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:20.418 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:20.418 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:20.418 16:49:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:20.418 16:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:20.418 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:20.418 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:14:20.418 00:14:20.418 --- 10.0.0.3 ping statistics --- 00:14:20.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.418 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:20.418 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:20.418 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:14:20.418 00:14:20.418 --- 10.0.0.4 ping statistics --- 00:14:20.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.418 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:20.418 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:20.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:20.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:14:20.418 00:14:20.418 --- 10.0.0.1 ping statistics --- 00:14:20.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.418 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:20.419 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:20.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:20.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:20.419 00:14:20.419 --- 10.0.0.2 ping statistics --- 00:14:20.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.419 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:20.419 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.419 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:14:20.419 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:20.419 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.419 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:20.419 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:20.419 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.419 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:20.419 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:20.678 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:20.678 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:20.678 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:20.678 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.678 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85005 00:14:20.678 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:20.678 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85005 00:14:20.678 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85005 ']' 00:14:20.678 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.678 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.678 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.678 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.678 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.678 [2024-11-29 16:49:44.295690] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:20.679 [2024-11-29 16:49:44.295836] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.679 [2024-11-29 16:49:44.426396] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:14:20.679 [2024-11-29 16:49:44.457853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.938 [2024-11-29 16:49:44.480652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.938 [2024-11-29 16:49:44.480711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.938 [2024-11-29 16:49:44.480724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.938 [2024-11-29 16:49:44.480734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.938 [2024-11-29 16:49:44.480742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.938 [2024-11-29 16:49:44.481093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.938 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.938 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:20.938 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:20.938 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:20.938 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.938 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.938 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:20.938 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:21.198 true 00:14:21.198 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:21.198 16:49:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:21.456 16:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:21.456 16:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:21.456 16:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:21.714 16:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:21.714 16:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:21.972 16:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:21.972 16:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:21.972 16:49:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:22.229 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:22.229 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:22.796 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:22.796 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:22.796 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:22.796 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:22.796 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:22.796 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:22.796 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:23.364 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:23.364 16:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:23.623 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:23.623 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:23.623 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:23.882 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:23.882 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:24.141 
16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.LHI93z8cMi 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.F7xqEuLzX7 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.LHI93z8cMi 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.F7xqEuLzX7 00:14:24.141 16:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:24.400 16:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:24.658 [2024-11-29 16:49:48.437447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:24.916 16:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.LHI93z8cMi 00:14:24.916 16:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LHI93z8cMi 00:14:24.916 16:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:25.175 [2024-11-29 16:49:48.775223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.175 16:49:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:25.433 16:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:25.691 [2024-11-29 16:49:49.351425] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:25.691 [2024-11-29 16:49:49.351729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:25.691 16:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:25.950 malloc0 00:14:25.950 16:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:26.208 16:49:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LHI93z8cMi 00:14:26.467 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:26.724 16:49:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.LHI93z8cMi 00:14:38.930 Initializing NVMe Controllers 00:14:38.930 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:38.930 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:38.930 Initialization complete. Launching workers. 00:14:38.930 ======================================================== 00:14:38.930 Latency(us) 00:14:38.930 Device Information : IOPS MiB/s Average min max 00:14:38.930 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10037.90 39.21 6377.12 1547.35 12296.33 00:14:38.931 ======================================================== 00:14:38.931 Total : 10037.90 39.21 6377.12 1547.35 12296.33 00:14:38.931 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LHI93z8cMi 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LHI93z8cMi 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85244 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85244 /var/tmp/bdevperf.sock 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85244 ']' 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.931 [2024-11-29 16:50:00.719216] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
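The NVMeTLSkey-1 strings used above come out of the suite's format_interchange_psk helper, which shells out to python. A minimal sketch of what that helper appears to produce — the configured PSK bytes plus a CRC-32 trailer, base64-encoded behind an NVMeTLSkey-1:<hash-indicator>: prefix — is shown below; the function name, the little-endian CRC byte order, and the exact print formatting are assumptions rather than values read from this log.

    format_interchange_psk_sketch() {
        # sketch only: key and digest mirror the arguments visible in the trace above
        local key=$1 digest=$2
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:{:02}:{}:".format(int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$digest"
    }
    # example call matching the first key in this run:
    #   format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1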
00:14:38.931 [2024-11-29 16:50:00.719334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85244 ] 00:14:38.931 [2024-11-29 16:50:00.840088] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:38.931 [2024-11-29 16:50:00.872266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.931 [2024-11-29 16:50:00.896258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.931 [2024-11-29 16:50:00.929828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:38.931 16:50:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LHI93z8cMi 00:14:38.931 16:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:38.931 [2024-11-29 16:50:01.505358] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:38.931 TLSTESTn1 00:14:38.931 16:50:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:38.931 Running I/O for 10 seconds... 
00:14:40.306 4248.00 IOPS, 16.59 MiB/s [2024-11-29T16:50:05.032Z] 4286.50 IOPS, 16.74 MiB/s [2024-11-29T16:50:05.970Z] 4328.67 IOPS, 16.91 MiB/s [2024-11-29T16:50:06.906Z] 4372.50 IOPS, 17.08 MiB/s [2024-11-29T16:50:07.843Z] 4370.60 IOPS, 17.07 MiB/s [2024-11-29T16:50:08.779Z] 4308.67 IOPS, 16.83 MiB/s [2024-11-29T16:50:09.716Z] 4238.57 IOPS, 16.56 MiB/s [2024-11-29T16:50:11.091Z] 4184.88 IOPS, 16.35 MiB/s [2024-11-29T16:50:12.024Z] 4156.22 IOPS, 16.24 MiB/s [2024-11-29T16:50:12.024Z] 4133.50 IOPS, 16.15 MiB/s 00:14:48.232 Latency(us) 00:14:48.232 [2024-11-29T16:50:12.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.233 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:48.233 Verification LBA range: start 0x0 length 0x2000 00:14:48.233 TLSTESTn1 : 10.02 4139.21 16.17 0.00 0.00 30866.53 6047.19 40989.79 00:14:48.233 [2024-11-29T16:50:12.025Z] =================================================================================================================== 00:14:48.233 [2024-11-29T16:50:12.025Z] Total : 4139.21 16.17 0.00 0.00 30866.53 6047.19 40989.79 00:14:48.233 { 00:14:48.233 "results": [ 00:14:48.233 { 00:14:48.233 "job": "TLSTESTn1", 00:14:48.233 "core_mask": "0x4", 00:14:48.233 "workload": "verify", 00:14:48.233 "status": "finished", 00:14:48.233 "verify_range": { 00:14:48.233 "start": 0, 00:14:48.233 "length": 8192 00:14:48.233 }, 00:14:48.233 "queue_depth": 128, 00:14:48.233 "io_size": 4096, 00:14:48.233 "runtime": 10.017126, 00:14:48.233 "iops": 4139.211186921279, 00:14:48.233 "mibps": 16.168793698911244, 00:14:48.233 "io_failed": 0, 00:14:48.233 "io_timeout": 0, 00:14:48.233 "avg_latency_us": 30866.529486924814, 00:14:48.233 "min_latency_us": 6047.185454545454, 00:14:48.233 "max_latency_us": 40989.78909090909 00:14:48.233 } 00:14:48.233 ], 00:14:48.233 "core_count": 1 00:14:48.233 } 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 85244 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85244 ']' 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85244 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85244 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:48.233 killing process with pid 85244 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85244' 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85244 00:14:48.233 Received shutdown signal, test time was about 10.000000 seconds 00:14:48.233 00:14:48.233 Latency(us) 00:14:48.233 [2024-11-29T16:50:12.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.233 [2024-11-29T16:50:12.025Z] 
=================================================================================================================== 00:14:48.233 [2024-11-29T16:50:12.025Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85244 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F7xqEuLzX7 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F7xqEuLzX7 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F7xqEuLzX7 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.F7xqEuLzX7 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85371 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85371 /var/tmp/bdevperf.sock 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85371 ']' 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.233 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.233 [2024-11-29 16:50:11.958660] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:48.233 [2024-11-29 16:50:11.958788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85371 ] 00:14:48.492 [2024-11-29 16:50:12.088894] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:48.492 [2024-11-29 16:50:12.108610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.492 [2024-11-29 16:50:12.127066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.492 [2024-11-29 16:50:12.154022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:48.492 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.492 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:48.492 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F7xqEuLzX7 00:14:48.751 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:49.009 [2024-11-29 16:50:12.781395] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:49.009 [2024-11-29 16:50:12.787908] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:49.009 [2024-11-29 16:50:12.787961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1379700 (107): Transport endpoint is not connected 00:14:49.009 [2024-11-29 16:50:12.788934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1379700 (9): Bad file descriptor 00:14:49.009 [2024-11-29 16:50:12.789932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:49.009 [2024-11-29 16:50:12.789973] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:49.009 [2024-11-29 16:50:12.789983] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:49.009 [2024-11-29 16:50:12.789996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:49.009 request: 00:14:49.009 { 00:14:49.009 "name": "TLSTEST", 00:14:49.009 "trtype": "tcp", 00:14:49.009 "traddr": "10.0.0.3", 00:14:49.009 "adrfam": "ipv4", 00:14:49.009 "trsvcid": "4420", 00:14:49.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.009 "prchk_reftag": false, 00:14:49.009 "prchk_guard": false, 00:14:49.009 "hdgst": false, 00:14:49.009 "ddgst": false, 00:14:49.009 "psk": "key0", 00:14:49.009 "allow_unrecognized_csi": false, 00:14:49.009 "method": "bdev_nvme_attach_controller", 00:14:49.009 "req_id": 1 00:14:49.009 } 00:14:49.009 Got JSON-RPC error response 00:14:49.009 response: 00:14:49.009 { 00:14:49.009 "code": -5, 00:14:49.009 "message": "Input/output error" 00:14:49.009 } 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85371 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85371 ']' 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85371 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85371 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:49.266 killing process with pid 85371 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85371' 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85371 00:14:49.266 Received shutdown signal, test time was about 10.000000 seconds 00:14:49.266 00:14:49.266 Latency(us) 00:14:49.266 [2024-11-29T16:50:13.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.266 [2024-11-29T16:50:13.058Z] =================================================================================================================== 00:14:49.266 [2024-11-29T16:50:13.058Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85371 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LHI93z8cMi 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LHI93z8cMi 
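This attach failure is intentional: the initiator registered /tmp/tmp.F7xqEuLzX7 while the target holds the other key, so the TLS handshake fails, the connection is torn down (the errno 107 records above), and the RPC surfaces an Input/output error. The suite wraps these runs in NOT, meaning a successful attach would fail the test; condensed to its effect, the check amounts to the sketch below (socket path, address, and NQNs are the ones used above; the if/exit framing is illustrative rather than the suite's literal code).

    # expect bdev_nvme_attach_controller to be rejected when the initiator PSK
    # does not match what the target registered for this host/subsystem pair
    if scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
           -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
           -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
        echo "unexpected success: a mismatched PSK must not authenticate" >&2
        exit 1
    fi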
00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LHI93z8cMi 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LHI93z8cMi 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85392 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85392 /var/tmp/bdevperf.sock 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85392 ']' 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.266 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.266 [2024-11-29 16:50:13.025399] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:49.266 [2024-11-29 16:50:13.025540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85392 ] 00:14:49.524 [2024-11-29 16:50:13.163020] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:49.524 [2024-11-29 16:50:13.183854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.524 [2024-11-29 16:50:13.202657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.524 [2024-11-29 16:50:13.231943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.458 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.458 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:50.458 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LHI93z8cMi 00:14:50.716 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:50.976 [2024-11-29 16:50:14.639894] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:50.976 [2024-11-29 16:50:14.648110] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:50.976 [2024-11-29 16:50:14.648166] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:50.976 [2024-11-29 16:50:14.648231] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:50.976 [2024-11-29 16:50:14.649194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aa700 (107): Transport endpoint is not connected 00:14:50.976 [2024-11-29 16:50:14.650185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aa700 (9): Bad file descriptor 00:14:50.976 [2024-11-29 16:50:14.651196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:50.976 [2024-11-29 16:50:14.651236] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:50.976 [2024-11-29 16:50:14.651246] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:50.976 [2024-11-29 16:50:14.651260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:50.976 request: 00:14:50.976 { 00:14:50.976 "name": "TLSTEST", 00:14:50.976 "trtype": "tcp", 00:14:50.976 "traddr": "10.0.0.3", 00:14:50.976 "adrfam": "ipv4", 00:14:50.976 "trsvcid": "4420", 00:14:50.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.976 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:50.976 "prchk_reftag": false, 00:14:50.976 "prchk_guard": false, 00:14:50.976 "hdgst": false, 00:14:50.976 "ddgst": false, 00:14:50.976 "psk": "key0", 00:14:50.976 "allow_unrecognized_csi": false, 00:14:50.976 "method": "bdev_nvme_attach_controller", 00:14:50.976 "req_id": 1 00:14:50.976 } 00:14:50.976 Got JSON-RPC error response 00:14:50.976 response: 00:14:50.976 { 00:14:50.976 "code": -5, 00:14:50.976 "message": "Input/output error" 00:14:50.976 } 00:14:50.976 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85392 00:14:50.976 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85392 ']' 00:14:50.976 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85392 00:14:50.976 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:50.976 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.976 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85392 00:14:50.976 killing process with pid 85392 00:14:50.976 Received shutdown signal, test time was about 10.000000 seconds 00:14:50.976 00:14:50.976 Latency(us) 00:14:50.976 [2024-11-29T16:50:14.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.976 [2024-11-29T16:50:14.768Z] =================================================================================================================== 00:14:50.976 [2024-11-29T16:50:14.768Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:50.976 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:50.976 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:50.976 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85392' 00:14:50.976 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85392 00:14:50.976 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85392 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LHI93z8cMi 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LHI93z8cMi 
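The host2 failure above is likewise expected: the target logs 'Could not find PSK for identity ... host2 ...' because only nqn.2016-06.io.spdk:host1 was ever added to the subsystem with a PSK. Granting a second host access would take its own keyring entry and add_host call on the target side, roughly as sketched here; the host2 key name and file path are purely illustrative.

    # target-side RPCs (hypothetical host2 key name and key file)
    scripts/rpc.py keyring_file_add_key key_host2 /tmp/host2_psk.txt
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk key_host2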
00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LHI93z8cMi 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LHI93z8cMi 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85421 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85421 /var/tmp/bdevperf.sock 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85421 ']' 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:51.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.235 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:51.235 [2024-11-29 16:50:14.886537] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:51.235 [2024-11-29 16:50:14.886674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85421 ] 00:14:51.235 [2024-11-29 16:50:15.014316] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:51.493 [2024-11-29 16:50:15.044826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.493 [2024-11-29 16:50:15.068516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.493 [2024-11-29 16:50:15.101172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.493 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.493 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:51.493 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LHI93z8cMi 00:14:51.750 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:52.036 [2024-11-29 16:50:15.743847] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:52.036 [2024-11-29 16:50:15.748508] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:52.036 [2024-11-29 16:50:15.748561] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:52.036 [2024-11-29 16:50:15.748622] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:52.036 [2024-11-29 16:50:15.749263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fef700 (107): Transport endpoint is not connected 00:14:52.036 [2024-11-29 16:50:15.750252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fef700 (9): Bad file descriptor 00:14:52.036 [2024-11-29 16:50:15.751249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:52.036 [2024-11-29 16:50:15.751284] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:52.037 [2024-11-29 16:50:15.751311] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:52.037 [2024-11-29 16:50:15.751323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:14:52.037 request: 00:14:52.037 { 00:14:52.037 "name": "TLSTEST", 00:14:52.037 "trtype": "tcp", 00:14:52.037 "traddr": "10.0.0.3", 00:14:52.037 "adrfam": "ipv4", 00:14:52.037 "trsvcid": "4420", 00:14:52.037 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:52.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:52.037 "prchk_reftag": false, 00:14:52.037 "prchk_guard": false, 00:14:52.037 "hdgst": false, 00:14:52.037 "ddgst": false, 00:14:52.037 "psk": "key0", 00:14:52.037 "allow_unrecognized_csi": false, 00:14:52.037 "method": "bdev_nvme_attach_controller", 00:14:52.037 "req_id": 1 00:14:52.037 } 00:14:52.037 Got JSON-RPC error response 00:14:52.037 response: 00:14:52.037 { 00:14:52.037 "code": -5, 00:14:52.037 "message": "Input/output error" 00:14:52.037 } 00:14:52.037 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85421 00:14:52.037 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85421 ']' 00:14:52.037 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85421 00:14:52.037 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:52.037 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.037 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85421 00:14:52.335 killing process with pid 85421 00:14:52.335 Received shutdown signal, test time was about 10.000000 seconds 00:14:52.335 00:14:52.335 Latency(us) 00:14:52.335 [2024-11-29T16:50:16.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.335 [2024-11-29T16:50:16.127Z] =================================================================================================================== 00:14:52.335 [2024-11-29T16:50:16.127Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85421' 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85421 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85421 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:52.335 16:50:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85442 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85442 /var/tmp/bdevperf.sock 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85442 ']' 00:14:52.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.335 16:50:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.335 [2024-11-29 16:50:15.996843] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:52.335 [2024-11-29 16:50:15.996951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85442 ] 00:14:52.335 [2024-11-29 16:50:16.124287] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
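The run starting here deliberately passes an empty string as the key path; as the records that follow show, keyring_file_add_key rejects it ('Non-absolute paths are not allowed') and the subsequent attach fails with 'Required key not available'. A hypothetical pre-flight guard for callers that register file-based keys could look like this (the empty value mirrors what this test injects).

    # illustrative guard only: keyring_file_add_key requires an absolute path
    key_path=""   # the deliberately empty value exercised by this test
    if [[ "$key_path" != /* ]]; then
        echo "refusing to register key0: '$key_path' is not an absolute path" >&2
    else
        scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    fi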
00:14:52.595 [2024-11-29 16:50:16.154481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.595 [2024-11-29 16:50:16.178551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.595 [2024-11-29 16:50:16.211803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:52.595 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.595 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:52.595 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:52.854 [2024-11-29 16:50:16.557972] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:52.854 [2024-11-29 16:50:16.558055] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:52.854 request: 00:14:52.854 { 00:14:52.854 "name": "key0", 00:14:52.854 "path": "", 00:14:52.854 "method": "keyring_file_add_key", 00:14:52.854 "req_id": 1 00:14:52.854 } 00:14:52.854 Got JSON-RPC error response 00:14:52.854 response: 00:14:52.854 { 00:14:52.854 "code": -1, 00:14:52.854 "message": "Operation not permitted" 00:14:52.854 } 00:14:52.854 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:53.113 [2024-11-29 16:50:16.886121] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:53.113 [2024-11-29 16:50:16.886323] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:53.113 request: 00:14:53.113 { 00:14:53.113 "name": "TLSTEST", 00:14:53.113 "trtype": "tcp", 00:14:53.113 "traddr": "10.0.0.3", 00:14:53.113 "adrfam": "ipv4", 00:14:53.113 "trsvcid": "4420", 00:14:53.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.113 "prchk_reftag": false, 00:14:53.113 "prchk_guard": false, 00:14:53.113 "hdgst": false, 00:14:53.113 "ddgst": false, 00:14:53.113 "psk": "key0", 00:14:53.113 "allow_unrecognized_csi": false, 00:14:53.113 "method": "bdev_nvme_attach_controller", 00:14:53.113 "req_id": 1 00:14:53.113 } 00:14:53.113 Got JSON-RPC error response 00:14:53.113 response: 00:14:53.113 { 00:14:53.113 "code": -126, 00:14:53.113 "message": "Required key not available" 00:14:53.113 } 00:14:53.373 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85442 00:14:53.373 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85442 ']' 00:14:53.373 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85442 00:14:53.373 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:53.373 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.373 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85442 00:14:53.373 killing process with pid 85442 00:14:53.373 Received shutdown signal, test time was about 10.000000 seconds 00:14:53.373 00:14:53.373 Latency(us) 00:14:53.373 [2024-11-29T16:50:17.165Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.373 [2024-11-29T16:50:17.165Z] =================================================================================================================== 00:14:53.373 [2024-11-29T16:50:17.165Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:53.373 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:53.373 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:53.373 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85442' 00:14:53.373 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85442 00:14:53.373 16:50:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85442 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 85005 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85005 ']' 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85005 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85005 00:14:53.373 killing process with pid 85005 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85005' 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85005 00:14:53.373 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85005 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:53.632 16:50:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.br8RQ8iLS0 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.br8RQ8iLS0 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85477 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85477 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85477 ']' 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.632 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.632 [2024-11-29 16:50:17.347062] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:53.632 [2024-11-29 16:50:17.347953] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.892 [2024-11-29 16:50:17.475084] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:53.892 [2024-11-29 16:50:17.505941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.892 [2024-11-29 16:50:17.528805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.892 [2024-11-29 16:50:17.528868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
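This second retained key uses hash indicator 02 with a 48-byte configured PSK, versus 01 and 32 bytes for the keys earlier in the run. Assuming the payload is base64 of the PSK followed by a 4-byte CRC-32, as the interchange strings in this log suggest, the lengths can be sanity-checked with the small helper below (helper name and framing are illustrative).

    psk_payload_bytes() {
        # strip the NVMeTLSkey-1:NN: prefix and trailing ':' and report the decoded
        # length minus the assumed 4-byte CRC-32 trailer
        local b64=${1#NVMeTLSkey-1:??:}
        b64=${b64%:}
        echo $(( $(printf '%s' "$b64" | base64 -d | wc -c) - 4 ))
    }
    psk_payload_bytes 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'   # prints 32
    psk_payload_bytes 'NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'   # prints 48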
00:14:53.892 [2024-11-29 16:50:17.528882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.892 [2024-11-29 16:50:17.528893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.892 [2024-11-29 16:50:17.528902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.892 [2024-11-29 16:50:17.529258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.892 [2024-11-29 16:50:17.563783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:53.892 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.892 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:53.892 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:53.892 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:53.892 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.892 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.892 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.br8RQ8iLS0 00:14:53.892 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.br8RQ8iLS0 00:14:53.892 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:54.151 [2024-11-29 16:50:17.911659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.151 16:50:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:54.410 16:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:54.668 [2024-11-29 16:50:18.399803] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:54.668 [2024-11-29 16:50:18.400198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:54.668 16:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:54.927 malloc0 00:14:54.927 16:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:55.493 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.br8RQ8iLS0 00:14:56.061 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.br8RQ8iLS0 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
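Condensed, the setup_nvmf_tgt pass traced above amounts to the following RPC sequence against the default /var/tmp/spdk.sock (paths and NQNs copied from the trace; a sketch of the flow, not the verbatim target/tls.sh):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.br8RQ8iLS0                                    # 0600 PSK file written above

$rpc nvmf_create_transport -t tcp -o                       # TCP transport
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0                 # 32 MB malloc bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key"                      # register the PSK in the keyring
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0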
00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.br8RQ8iLS0 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85532 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85532 /var/tmp/bdevperf.sock 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85532 ']' 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:56.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.320 16:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.320 [2024-11-29 16:50:19.906182] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:14:56.321 [2024-11-29 16:50:19.906432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85532 ] 00:14:56.321 [2024-11-29 16:50:20.029418] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
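The initiator half that follows drives everything through the bdevperf RPC socket: register the same PSK file there, attach a TLS-secured controller to the listener created above, then kick off the timed run with the bdevperf helper script. Condensed from the trace below (socket path, NQNs and key name as logged; a sketch, not the verbatim run_bdevperf):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

$rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.br8RQ8iLS0
$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests

With -q 128 and 4 KiB I/O the verify workload settles around 4.1 kIOPS, i.e. roughly 4104 * 4096 B ≈ 16.0 MiB/s, which matches the final latency table.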
00:14:56.321 [2024-11-29 16:50:20.063394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.321 [2024-11-29 16:50:20.087621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.580 [2024-11-29 16:50:20.123665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:56.580 16:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.580 16:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:56.580 16:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.br8RQ8iLS0 00:14:56.839 16:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:57.098 [2024-11-29 16:50:20.655676] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:57.098 TLSTESTn1 00:14:57.098 16:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:57.098 Running I/O for 10 seconds... 00:14:59.409 4224.00 IOPS, 16.50 MiB/s [2024-11-29T16:50:24.135Z] 4181.00 IOPS, 16.33 MiB/s [2024-11-29T16:50:25.072Z] 4176.33 IOPS, 16.31 MiB/s [2024-11-29T16:50:26.028Z] 4218.25 IOPS, 16.48 MiB/s [2024-11-29T16:50:26.964Z] 4227.20 IOPS, 16.51 MiB/s [2024-11-29T16:50:27.899Z] 4217.33 IOPS, 16.47 MiB/s [2024-11-29T16:50:29.273Z] 4167.57 IOPS, 16.28 MiB/s [2024-11-29T16:50:30.207Z] 4113.00 IOPS, 16.07 MiB/s [2024-11-29T16:50:31.169Z] 4097.00 IOPS, 16.00 MiB/s [2024-11-29T16:50:31.169Z] 4098.50 IOPS, 16.01 MiB/s 00:15:07.377 Latency(us) 00:15:07.377 [2024-11-29T16:50:31.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.377 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:07.377 Verification LBA range: start 0x0 length 0x2000 00:15:07.377 TLSTESTn1 : 10.02 4104.37 16.03 0.00 0.00 31129.06 5510.98 28240.06 00:15:07.377 [2024-11-29T16:50:31.169Z] =================================================================================================================== 00:15:07.377 [2024-11-29T16:50:31.169Z] Total : 4104.37 16.03 0.00 0.00 31129.06 5510.98 28240.06 00:15:07.377 { 00:15:07.377 "results": [ 00:15:07.377 { 00:15:07.377 "job": "TLSTESTn1", 00:15:07.377 "core_mask": "0x4", 00:15:07.377 "workload": "verify", 00:15:07.377 "status": "finished", 00:15:07.377 "verify_range": { 00:15:07.378 "start": 0, 00:15:07.378 "length": 8192 00:15:07.378 }, 00:15:07.378 "queue_depth": 128, 00:15:07.378 "io_size": 4096, 00:15:07.378 "runtime": 10.016879, 00:15:07.378 "iops": 4104.372230112793, 00:15:07.378 "mibps": 16.032704023878097, 00:15:07.378 "io_failed": 0, 00:15:07.378 "io_timeout": 0, 00:15:07.378 "avg_latency_us": 31129.057299902925, 00:15:07.378 "min_latency_us": 5510.981818181818, 00:15:07.378 "max_latency_us": 28240.05818181818 00:15:07.378 } 00:15:07.378 ], 00:15:07.378 "core_count": 1 00:15:07.378 } 00:15:07.378 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:07.378 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@46 -- # killprocess 85532 00:15:07.378 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85532 ']' 00:15:07.378 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85532 00:15:07.378 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:07.378 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.378 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85532 00:15:07.378 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:07.378 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:07.378 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85532' 00:15:07.378 killing process with pid 85532 00:15:07.378 Received shutdown signal, test time was about 10.000000 seconds 00:15:07.378 00:15:07.378 Latency(us) 00:15:07.378 [2024-11-29T16:50:31.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.378 [2024-11-29T16:50:31.170Z] =================================================================================================================== 00:15:07.378 [2024-11-29T16:50:31.170Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:07.378 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85532 00:15:07.378 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85532 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.br8RQ8iLS0 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.br8RQ8iLS0 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.br8RQ8iLS0 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.br8RQ8iLS0 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.br8RQ8iLS0 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:07.378 
16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85660 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85660 /var/tmp/bdevperf.sock 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85660 ']' 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:07.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.378 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.378 [2024-11-29 16:50:31.096517] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:07.378 [2024-11-29 16:50:31.096802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85660 ] 00:15:07.637 [2024-11-29 16:50:31.217306] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:07.637 [2024-11-29 16:50:31.241514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.637 [2024-11-29 16:50:31.261852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.637 [2024-11-29 16:50:31.291092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:07.637 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.637 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:07.637 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.br8RQ8iLS0 00:15:07.896 [2024-11-29 16:50:31.644506] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.br8RQ8iLS0': 0100666 00:15:07.896 [2024-11-29 16:50:31.644741] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:07.896 request: 00:15:07.896 { 00:15:07.896 "name": "key0", 00:15:07.896 "path": "/tmp/tmp.br8RQ8iLS0", 00:15:07.896 "method": "keyring_file_add_key", 00:15:07.896 "req_id": 1 00:15:07.896 } 00:15:07.896 Got JSON-RPC error response 00:15:07.896 response: 00:15:07.896 { 00:15:07.896 "code": -1, 00:15:07.896 "message": "Operation not permitted" 00:15:07.896 } 00:15:07.896 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:08.155 [2024-11-29 16:50:31.912755] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.155 [2024-11-29 16:50:31.912815] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:08.155 request: 00:15:08.155 { 00:15:08.155 "name": "TLSTEST", 00:15:08.155 "trtype": "tcp", 00:15:08.155 "traddr": "10.0.0.3", 00:15:08.155 "adrfam": "ipv4", 00:15:08.155 "trsvcid": "4420", 00:15:08.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.155 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.155 "prchk_reftag": false, 00:15:08.155 "prchk_guard": false, 00:15:08.155 "hdgst": false, 00:15:08.155 "ddgst": false, 00:15:08.155 "psk": "key0", 00:15:08.155 "allow_unrecognized_csi": false, 00:15:08.155 "method": "bdev_nvme_attach_controller", 00:15:08.155 "req_id": 1 00:15:08.155 } 00:15:08.155 Got JSON-RPC error response 00:15:08.155 response: 00:15:08.155 { 00:15:08.155 "code": -126, 00:15:08.155 "message": "Required key not available" 00:15:08.155 } 00:15:08.155 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85660 00:15:08.155 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85660 ']' 00:15:08.155 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85660 00:15:08.155 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:08.155 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.155 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85660 00:15:08.414 killing process with pid 85660 00:15:08.414 Received shutdown signal, test time was about 10.000000 seconds 00:15:08.414 00:15:08.414 
Latency(us) 00:15:08.414 [2024-11-29T16:50:32.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.414 [2024-11-29T16:50:32.206Z] =================================================================================================================== 00:15:08.414 [2024-11-29T16:50:32.206Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:08.414 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:08.414 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:08.414 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85660' 00:15:08.414 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85660 00:15:08.414 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85660 00:15:08.414 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:08.414 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:08.414 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:08.415 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:08.415 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:08.415 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 85477 00:15:08.415 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85477 ']' 00:15:08.415 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85477 00:15:08.415 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:08.415 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.415 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85477 00:15:08.415 killing process with pid 85477 00:15:08.415 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:08.415 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:08.415 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85477' 00:15:08.415 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85477 00:15:08.415 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85477 00:15:08.674 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:08.674 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:08.674 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:08.674 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.674 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:08.674 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85686 00:15:08.674 16:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85686 00:15:08.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.674 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85686 ']' 00:15:08.674 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.674 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.674 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.674 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.674 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.674 [2024-11-29 16:50:32.342623] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:08.674 [2024-11-29 16:50:32.342985] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.933 [2024-11-29 16:50:32.472766] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:08.934 [2024-11-29 16:50:32.504850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.934 [2024-11-29 16:50:32.525181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.934 [2024-11-29 16:50:32.525241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.934 [2024-11-29 16:50:32.525268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.934 [2024-11-29 16:50:32.525276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.934 [2024-11-29 16:50:32.525284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
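Both failure passes in this phase, the initiator-side one above (bdevperf pid 85660) and the target-side one below (against pid 85686), hinge on the same rule: the keyring_file backend rejects any key file whose mode grants group/other access, which is why the earlier chmod 0666 turns every keyring_file_add_key into 'Invalid permissions for key file ... 0100666' and the dependent RPCs into 'Required key not available' / 'Internal error'. A quick pre-flight check equivalent to what fails here (a sketch; keyring_file_check_path in keyring.c is the authoritative check):

key=/tmp/tmp.br8RQ8iLS0
perm=$(stat -c '%a' "$key")
if (( 0$perm & 077 )); then                  # any group/other bit set -> the keyring refuses it
    echo "refusing $key: mode $perm is too permissive, use chmod 0600" >&2
    exit 1
fi

The harness restores chmod 0600 before the positive pass that follows.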
00:15:08.934 [2024-11-29 16:50:32.525629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.934 [2024-11-29 16:50:32.555155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.br8RQ8iLS0 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.br8RQ8iLS0 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.br8RQ8iLS0 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.br8RQ8iLS0 00:15:08.934 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:09.192 [2024-11-29 16:50:32.969055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.450 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:09.708 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:09.966 [2024-11-29 16:50:33.645194] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:09.966 [2024-11-29 16:50:33.645482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:09.966 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:10.532 malloc0 00:15:10.532 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:10.790 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.br8RQ8iLS0 00:15:11.046 
[2024-11-29 16:50:34.716977] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.br8RQ8iLS0': 0100666 00:15:11.046 [2024-11-29 16:50:34.717052] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:11.046 request: 00:15:11.046 { 00:15:11.046 "name": "key0", 00:15:11.046 "path": "/tmp/tmp.br8RQ8iLS0", 00:15:11.046 "method": "keyring_file_add_key", 00:15:11.046 "req_id": 1 00:15:11.046 } 00:15:11.046 Got JSON-RPC error response 00:15:11.046 response: 00:15:11.046 { 00:15:11.046 "code": -1, 00:15:11.046 "message": "Operation not permitted" 00:15:11.046 } 00:15:11.046 16:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:11.304 [2024-11-29 16:50:35.041134] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:11.304 [2024-11-29 16:50:35.041240] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:11.304 request: 00:15:11.304 { 00:15:11.304 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.304 "host": "nqn.2016-06.io.spdk:host1", 00:15:11.304 "psk": "key0", 00:15:11.304 "method": "nvmf_subsystem_add_host", 00:15:11.304 "req_id": 1 00:15:11.304 } 00:15:11.304 Got JSON-RPC error response 00:15:11.304 response: 00:15:11.304 { 00:15:11.304 "code": -32603, 00:15:11.304 "message": "Internal error" 00:15:11.304 } 00:15:11.304 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:11.304 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:11.304 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:11.304 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:11.304 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 85686 00:15:11.304 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85686 ']' 00:15:11.304 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85686 00:15:11.304 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:11.304 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.304 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85686 00:15:11.562 killing process with pid 85686 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85686' 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85686 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85686 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.br8RQ8iLS0 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85748 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85748 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85748 ']' 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.562 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.562 [2024-11-29 16:50:35.327251] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:11.562 [2024-11-29 16:50:35.327365] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.820 [2024-11-29 16:50:35.462464] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:11.820 [2024-11-29 16:50:35.486115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.820 [2024-11-29 16:50:35.507827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.820 [2024-11-29 16:50:35.507887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.820 [2024-11-29 16:50:35.507899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.820 [2024-11-29 16:50:35.507908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.820 [2024-11-29 16:50:35.507915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
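Each nvmfappstart in this log (pids 85477, 85686 and now 85748) follows the same launch-and-wait pattern: start nvmf_tgt inside the nvmf_tgt_ns_spdk namespace, record its pid, then poll the default RPC socket for up to max_retries=100 iterations until the application answers. A rough standalone equivalent, assuming rpc_get_methods as the readiness probe (the harness's waitforlisten may probe differently):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do              # mirrors max_retries=100 in the trace
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.5
done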
00:15:11.820 [2024-11-29 16:50:35.508209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.820 [2024-11-29 16:50:35.540211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:12.079 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.079 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:12.079 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:12.079 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:12.079 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.079 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.079 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.br8RQ8iLS0 00:15:12.079 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.br8RQ8iLS0 00:15:12.079 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:12.338 [2024-11-29 16:50:35.894850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.338 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:12.596 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:12.855 [2024-11-29 16:50:36.430967] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:12.855 [2024-11-29 16:50:36.431453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:12.855 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:13.114 malloc0 00:15:13.114 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:13.373 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.br8RQ8iLS0 00:15:13.373 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:13.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:13.940 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=85796 00:15:13.940 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:13.940 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:13.940 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 85796 /var/tmp/bdevperf.sock 00:15:13.940 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85796 ']' 00:15:13.940 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.940 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.940 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.940 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.940 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.940 [2024-11-29 16:50:37.488543] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:13.941 [2024-11-29 16:50:37.488854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85796 ] 00:15:13.941 [2024-11-29 16:50:37.609077] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
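Once this bdevperf instance (pid 85796) attaches, the test snapshots both sides with save_config; the two large JSON blobs below are those snapshots (tgtconf from the target socket, bdevperfconf from /var/tmp/bdevperf.sock). Such a blob is directly replayable, which is what the next phase does by feeding a saved config back through -c /dev/fd/62. A sketch of the same round trip outside the harness (file names are illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > tgt.json                                  # first blob below
$rpc -s /var/tmp/bdevperf.sock save_config > bdevperf.json   # second blob below
# replay either at startup ...
#   nvmf_tgt -m 0x2 -c tgt.json
# ... or into an already-running application:
#   $rpc load_config < tgt.json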
00:15:13.941 [2024-11-29 16:50:37.643621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.941 [2024-11-29 16:50:37.668291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.941 [2024-11-29 16:50:37.701875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:14.199 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.199 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:14.199 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.br8RQ8iLS0 00:15:14.199 16:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:14.457 [2024-11-29 16:50:38.209036] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:14.716 TLSTESTn1 00:15:14.716 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:14.976 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:14.976 "subsystems": [ 00:15:14.976 { 00:15:14.976 "subsystem": "keyring", 00:15:14.976 "config": [ 00:15:14.976 { 00:15:14.976 "method": "keyring_file_add_key", 00:15:14.976 "params": { 00:15:14.976 "name": "key0", 00:15:14.976 "path": "/tmp/tmp.br8RQ8iLS0" 00:15:14.976 } 00:15:14.976 } 00:15:14.976 ] 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "subsystem": "iobuf", 00:15:14.976 "config": [ 00:15:14.976 { 00:15:14.976 "method": "iobuf_set_options", 00:15:14.976 "params": { 00:15:14.976 "small_pool_count": 8192, 00:15:14.976 "large_pool_count": 1024, 00:15:14.976 "small_bufsize": 8192, 00:15:14.976 "large_bufsize": 135168, 00:15:14.976 "enable_numa": false 00:15:14.976 } 00:15:14.976 } 00:15:14.976 ] 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "subsystem": "sock", 00:15:14.976 "config": [ 00:15:14.976 { 00:15:14.976 "method": "sock_set_default_impl", 00:15:14.976 "params": { 00:15:14.976 "impl_name": "uring" 00:15:14.976 } 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "method": "sock_impl_set_options", 00:15:14.976 "params": { 00:15:14.976 "impl_name": "ssl", 00:15:14.976 "recv_buf_size": 4096, 00:15:14.976 "send_buf_size": 4096, 00:15:14.976 "enable_recv_pipe": true, 00:15:14.976 "enable_quickack": false, 00:15:14.976 "enable_placement_id": 0, 00:15:14.976 "enable_zerocopy_send_server": true, 00:15:14.976 "enable_zerocopy_send_client": false, 00:15:14.976 "zerocopy_threshold": 0, 00:15:14.976 "tls_version": 0, 00:15:14.976 "enable_ktls": false 00:15:14.976 } 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "method": "sock_impl_set_options", 00:15:14.976 "params": { 00:15:14.976 "impl_name": "posix", 00:15:14.976 "recv_buf_size": 2097152, 00:15:14.976 "send_buf_size": 2097152, 00:15:14.976 "enable_recv_pipe": true, 00:15:14.976 "enable_quickack": false, 00:15:14.976 "enable_placement_id": 0, 00:15:14.976 "enable_zerocopy_send_server": true, 00:15:14.976 "enable_zerocopy_send_client": false, 00:15:14.976 "zerocopy_threshold": 0, 00:15:14.976 "tls_version": 0, 00:15:14.976 "enable_ktls": false 00:15:14.976 } 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "method": 
"sock_impl_set_options", 00:15:14.976 "params": { 00:15:14.976 "impl_name": "uring", 00:15:14.976 "recv_buf_size": 2097152, 00:15:14.976 "send_buf_size": 2097152, 00:15:14.976 "enable_recv_pipe": true, 00:15:14.976 "enable_quickack": false, 00:15:14.976 "enable_placement_id": 0, 00:15:14.976 "enable_zerocopy_send_server": false, 00:15:14.976 "enable_zerocopy_send_client": false, 00:15:14.976 "zerocopy_threshold": 0, 00:15:14.976 "tls_version": 0, 00:15:14.976 "enable_ktls": false 00:15:14.976 } 00:15:14.976 } 00:15:14.976 ] 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "subsystem": "vmd", 00:15:14.976 "config": [] 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "subsystem": "accel", 00:15:14.976 "config": [ 00:15:14.976 { 00:15:14.976 "method": "accel_set_options", 00:15:14.976 "params": { 00:15:14.976 "small_cache_size": 128, 00:15:14.976 "large_cache_size": 16, 00:15:14.976 "task_count": 2048, 00:15:14.976 "sequence_count": 2048, 00:15:14.976 "buf_count": 2048 00:15:14.976 } 00:15:14.976 } 00:15:14.976 ] 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "subsystem": "bdev", 00:15:14.976 "config": [ 00:15:14.976 { 00:15:14.976 "method": "bdev_set_options", 00:15:14.976 "params": { 00:15:14.976 "bdev_io_pool_size": 65535, 00:15:14.976 "bdev_io_cache_size": 256, 00:15:14.976 "bdev_auto_examine": true, 00:15:14.976 "iobuf_small_cache_size": 128, 00:15:14.976 "iobuf_large_cache_size": 16 00:15:14.976 } 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "method": "bdev_raid_set_options", 00:15:14.976 "params": { 00:15:14.976 "process_window_size_kb": 1024, 00:15:14.976 "process_max_bandwidth_mb_sec": 0 00:15:14.976 } 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "method": "bdev_iscsi_set_options", 00:15:14.976 "params": { 00:15:14.976 "timeout_sec": 30 00:15:14.976 } 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "method": "bdev_nvme_set_options", 00:15:14.976 "params": { 00:15:14.976 "action_on_timeout": "none", 00:15:14.976 "timeout_us": 0, 00:15:14.976 "timeout_admin_us": 0, 00:15:14.976 "keep_alive_timeout_ms": 10000, 00:15:14.976 "arbitration_burst": 0, 00:15:14.976 "low_priority_weight": 0, 00:15:14.976 "medium_priority_weight": 0, 00:15:14.976 "high_priority_weight": 0, 00:15:14.976 "nvme_adminq_poll_period_us": 10000, 00:15:14.976 "nvme_ioq_poll_period_us": 0, 00:15:14.976 "io_queue_requests": 0, 00:15:14.976 "delay_cmd_submit": true, 00:15:14.976 "transport_retry_count": 4, 00:15:14.976 "bdev_retry_count": 3, 00:15:14.976 "transport_ack_timeout": 0, 00:15:14.976 "ctrlr_loss_timeout_sec": 0, 00:15:14.976 "reconnect_delay_sec": 0, 00:15:14.976 "fast_io_fail_timeout_sec": 0, 00:15:14.976 "disable_auto_failback": false, 00:15:14.976 "generate_uuids": false, 00:15:14.976 "transport_tos": 0, 00:15:14.976 "nvme_error_stat": false, 00:15:14.976 "rdma_srq_size": 0, 00:15:14.976 "io_path_stat": false, 00:15:14.976 "allow_accel_sequence": false, 00:15:14.976 "rdma_max_cq_size": 0, 00:15:14.976 "rdma_cm_event_timeout_ms": 0, 00:15:14.976 "dhchap_digests": [ 00:15:14.976 "sha256", 00:15:14.976 "sha384", 00:15:14.976 "sha512" 00:15:14.976 ], 00:15:14.976 "dhchap_dhgroups": [ 00:15:14.976 "null", 00:15:14.976 "ffdhe2048", 00:15:14.976 "ffdhe3072", 00:15:14.976 "ffdhe4096", 00:15:14.976 "ffdhe6144", 00:15:14.976 "ffdhe8192" 00:15:14.976 ] 00:15:14.976 } 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "method": "bdev_nvme_set_hotplug", 00:15:14.976 "params": { 00:15:14.976 "period_us": 100000, 00:15:14.976 "enable": false 00:15:14.976 } 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "method": "bdev_malloc_create", 00:15:14.976 
"params": { 00:15:14.976 "name": "malloc0", 00:15:14.976 "num_blocks": 8192, 00:15:14.976 "block_size": 4096, 00:15:14.976 "physical_block_size": 4096, 00:15:14.976 "uuid": "48bc47f8-4229-40d9-8b13-6c379c95a470", 00:15:14.976 "optimal_io_boundary": 0, 00:15:14.976 "md_size": 0, 00:15:14.976 "dif_type": 0, 00:15:14.976 "dif_is_head_of_md": false, 00:15:14.976 "dif_pi_format": 0 00:15:14.976 } 00:15:14.976 }, 00:15:14.976 { 00:15:14.977 "method": "bdev_wait_for_examine" 00:15:14.977 } 00:15:14.977 ] 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "subsystem": "nbd", 00:15:14.977 "config": [] 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "subsystem": "scheduler", 00:15:14.977 "config": [ 00:15:14.977 { 00:15:14.977 "method": "framework_set_scheduler", 00:15:14.977 "params": { 00:15:14.977 "name": "static" 00:15:14.977 } 00:15:14.977 } 00:15:14.977 ] 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "subsystem": "nvmf", 00:15:14.977 "config": [ 00:15:14.977 { 00:15:14.977 "method": "nvmf_set_config", 00:15:14.977 "params": { 00:15:14.977 "discovery_filter": "match_any", 00:15:14.977 "admin_cmd_passthru": { 00:15:14.977 "identify_ctrlr": false 00:15:14.977 }, 00:15:14.977 "dhchap_digests": [ 00:15:14.977 "sha256", 00:15:14.977 "sha384", 00:15:14.977 "sha512" 00:15:14.977 ], 00:15:14.977 "dhchap_dhgroups": [ 00:15:14.977 "null", 00:15:14.977 "ffdhe2048", 00:15:14.977 "ffdhe3072", 00:15:14.977 "ffdhe4096", 00:15:14.977 "ffdhe6144", 00:15:14.977 "ffdhe8192" 00:15:14.977 ] 00:15:14.977 } 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "method": "nvmf_set_max_subsystems", 00:15:14.977 "params": { 00:15:14.977 "max_subsystems": 1024 00:15:14.977 } 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "method": "nvmf_set_crdt", 00:15:14.977 "params": { 00:15:14.977 "crdt1": 0, 00:15:14.977 "crdt2": 0, 00:15:14.977 "crdt3": 0 00:15:14.977 } 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "method": "nvmf_create_transport", 00:15:14.977 "params": { 00:15:14.977 "trtype": "TCP", 00:15:14.977 "max_queue_depth": 128, 00:15:14.977 "max_io_qpairs_per_ctrlr": 127, 00:15:14.977 "in_capsule_data_size": 4096, 00:15:14.977 "max_io_size": 131072, 00:15:14.977 "io_unit_size": 131072, 00:15:14.977 "max_aq_depth": 128, 00:15:14.977 "num_shared_buffers": 511, 00:15:14.977 "buf_cache_size": 4294967295, 00:15:14.977 "dif_insert_or_strip": false, 00:15:14.977 "zcopy": false, 00:15:14.977 "c2h_success": false, 00:15:14.977 "sock_priority": 0, 00:15:14.977 "abort_timeout_sec": 1, 00:15:14.977 "ack_timeout": 0, 00:15:14.977 "data_wr_pool_size": 0 00:15:14.977 } 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "method": "nvmf_create_subsystem", 00:15:14.977 "params": { 00:15:14.977 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.977 "allow_any_host": false, 00:15:14.977 "serial_number": "SPDK00000000000001", 00:15:14.977 "model_number": "SPDK bdev Controller", 00:15:14.977 "max_namespaces": 10, 00:15:14.977 "min_cntlid": 1, 00:15:14.977 "max_cntlid": 65519, 00:15:14.977 "ana_reporting": false 00:15:14.977 } 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "method": "nvmf_subsystem_add_host", 00:15:14.977 "params": { 00:15:14.977 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.977 "host": "nqn.2016-06.io.spdk:host1", 00:15:14.977 "psk": "key0" 00:15:14.977 } 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "method": "nvmf_subsystem_add_ns", 00:15:14.977 "params": { 00:15:14.977 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.977 "namespace": { 00:15:14.977 "nsid": 1, 00:15:14.977 "bdev_name": "malloc0", 00:15:14.977 "nguid": "48BC47F8422940D98B136C379C95A470", 00:15:14.977 
"uuid": "48bc47f8-4229-40d9-8b13-6c379c95a470", 00:15:14.977 "no_auto_visible": false 00:15:14.977 } 00:15:14.977 } 00:15:14.977 }, 00:15:14.977 { 00:15:14.977 "method": "nvmf_subsystem_add_listener", 00:15:14.977 "params": { 00:15:14.977 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.977 "listen_address": { 00:15:14.977 "trtype": "TCP", 00:15:14.977 "adrfam": "IPv4", 00:15:14.977 "traddr": "10.0.0.3", 00:15:14.977 "trsvcid": "4420" 00:15:14.977 }, 00:15:14.977 "secure_channel": true 00:15:14.977 } 00:15:14.977 } 00:15:14.977 ] 00:15:14.977 } 00:15:14.977 ] 00:15:14.977 }' 00:15:14.977 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:15.236 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:15.236 "subsystems": [ 00:15:15.236 { 00:15:15.236 "subsystem": "keyring", 00:15:15.236 "config": [ 00:15:15.236 { 00:15:15.236 "method": "keyring_file_add_key", 00:15:15.236 "params": { 00:15:15.236 "name": "key0", 00:15:15.236 "path": "/tmp/tmp.br8RQ8iLS0" 00:15:15.236 } 00:15:15.236 } 00:15:15.236 ] 00:15:15.236 }, 00:15:15.236 { 00:15:15.236 "subsystem": "iobuf", 00:15:15.236 "config": [ 00:15:15.236 { 00:15:15.237 "method": "iobuf_set_options", 00:15:15.237 "params": { 00:15:15.237 "small_pool_count": 8192, 00:15:15.237 "large_pool_count": 1024, 00:15:15.237 "small_bufsize": 8192, 00:15:15.237 "large_bufsize": 135168, 00:15:15.237 "enable_numa": false 00:15:15.237 } 00:15:15.237 } 00:15:15.237 ] 00:15:15.237 }, 00:15:15.237 { 00:15:15.237 "subsystem": "sock", 00:15:15.237 "config": [ 00:15:15.237 { 00:15:15.237 "method": "sock_set_default_impl", 00:15:15.237 "params": { 00:15:15.237 "impl_name": "uring" 00:15:15.237 } 00:15:15.237 }, 00:15:15.237 { 00:15:15.237 "method": "sock_impl_set_options", 00:15:15.237 "params": { 00:15:15.237 "impl_name": "ssl", 00:15:15.237 "recv_buf_size": 4096, 00:15:15.237 "send_buf_size": 4096, 00:15:15.237 "enable_recv_pipe": true, 00:15:15.237 "enable_quickack": false, 00:15:15.237 "enable_placement_id": 0, 00:15:15.237 "enable_zerocopy_send_server": true, 00:15:15.237 "enable_zerocopy_send_client": false, 00:15:15.237 "zerocopy_threshold": 0, 00:15:15.237 "tls_version": 0, 00:15:15.237 "enable_ktls": false 00:15:15.237 } 00:15:15.237 }, 00:15:15.237 { 00:15:15.237 "method": "sock_impl_set_options", 00:15:15.237 "params": { 00:15:15.237 "impl_name": "posix", 00:15:15.237 "recv_buf_size": 2097152, 00:15:15.237 "send_buf_size": 2097152, 00:15:15.237 "enable_recv_pipe": true, 00:15:15.237 "enable_quickack": false, 00:15:15.237 "enable_placement_id": 0, 00:15:15.237 "enable_zerocopy_send_server": true, 00:15:15.237 "enable_zerocopy_send_client": false, 00:15:15.237 "zerocopy_threshold": 0, 00:15:15.237 "tls_version": 0, 00:15:15.237 "enable_ktls": false 00:15:15.237 } 00:15:15.237 }, 00:15:15.237 { 00:15:15.237 "method": "sock_impl_set_options", 00:15:15.237 "params": { 00:15:15.237 "impl_name": "uring", 00:15:15.237 "recv_buf_size": 2097152, 00:15:15.237 "send_buf_size": 2097152, 00:15:15.237 "enable_recv_pipe": true, 00:15:15.237 "enable_quickack": false, 00:15:15.237 "enable_placement_id": 0, 00:15:15.237 "enable_zerocopy_send_server": false, 00:15:15.237 "enable_zerocopy_send_client": false, 00:15:15.237 "zerocopy_threshold": 0, 00:15:15.237 "tls_version": 0, 00:15:15.237 "enable_ktls": false 00:15:15.237 } 00:15:15.237 } 00:15:15.237 ] 00:15:15.237 }, 00:15:15.237 { 00:15:15.237 "subsystem": "vmd", 00:15:15.237 
"config": [] 00:15:15.237 }, 00:15:15.237 { 00:15:15.237 "subsystem": "accel", 00:15:15.237 "config": [ 00:15:15.237 { 00:15:15.237 "method": "accel_set_options", 00:15:15.237 "params": { 00:15:15.237 "small_cache_size": 128, 00:15:15.237 "large_cache_size": 16, 00:15:15.237 "task_count": 2048, 00:15:15.237 "sequence_count": 2048, 00:15:15.237 "buf_count": 2048 00:15:15.237 } 00:15:15.237 } 00:15:15.237 ] 00:15:15.237 }, 00:15:15.237 { 00:15:15.237 "subsystem": "bdev", 00:15:15.237 "config": [ 00:15:15.237 { 00:15:15.237 "method": "bdev_set_options", 00:15:15.237 "params": { 00:15:15.237 "bdev_io_pool_size": 65535, 00:15:15.237 "bdev_io_cache_size": 256, 00:15:15.237 "bdev_auto_examine": true, 00:15:15.237 "iobuf_small_cache_size": 128, 00:15:15.237 "iobuf_large_cache_size": 16 00:15:15.237 } 00:15:15.237 }, 00:15:15.237 { 00:15:15.237 "method": "bdev_raid_set_options", 00:15:15.237 "params": { 00:15:15.237 "process_window_size_kb": 1024, 00:15:15.237 "process_max_bandwidth_mb_sec": 0 00:15:15.237 } 00:15:15.237 }, 00:15:15.237 { 00:15:15.237 "method": "bdev_iscsi_set_options", 00:15:15.237 "params": { 00:15:15.237 "timeout_sec": 30 00:15:15.237 } 00:15:15.237 }, 00:15:15.237 { 00:15:15.237 "method": "bdev_nvme_set_options", 00:15:15.237 "params": { 00:15:15.237 "action_on_timeout": "none", 00:15:15.237 "timeout_us": 0, 00:15:15.237 "timeout_admin_us": 0, 00:15:15.237 "keep_alive_timeout_ms": 10000, 00:15:15.237 "arbitration_burst": 0, 00:15:15.237 "low_priority_weight": 0, 00:15:15.237 "medium_priority_weight": 0, 00:15:15.237 "high_priority_weight": 0, 00:15:15.237 "nvme_adminq_poll_period_us": 10000, 00:15:15.237 "nvme_ioq_poll_period_us": 0, 00:15:15.237 "io_queue_requests": 512, 00:15:15.237 "delay_cmd_submit": true, 00:15:15.237 "transport_retry_count": 4, 00:15:15.237 "bdev_retry_count": 3, 00:15:15.237 "transport_ack_timeout": 0, 00:15:15.237 "ctrlr_loss_timeout_sec": 0, 00:15:15.237 "reconnect_delay_sec": 0, 00:15:15.237 "fast_io_fail_timeout_sec": 0, 00:15:15.237 "disable_auto_failback": false, 00:15:15.237 "generate_uuids": false, 00:15:15.237 "transport_tos": 0, 00:15:15.237 "nvme_error_stat": false, 00:15:15.237 "rdma_srq_size": 0, 00:15:15.237 "io_path_stat": false, 00:15:15.237 "allow_accel_sequence": false, 00:15:15.237 "rdma_max_cq_size": 0, 00:15:15.237 "rdma_cm_event_timeout_ms": 0, 00:15:15.237 "dhchap_digests": [ 00:15:15.237 "sha256", 00:15:15.237 "sha384", 00:15:15.237 "sha512" 00:15:15.237 ], 00:15:15.237 "dhchap_dhgroups": [ 00:15:15.237 "null", 00:15:15.237 "ffdhe2048", 00:15:15.237 "ffdhe3072", 00:15:15.237 "ffdhe4096", 00:15:15.237 "ffdhe6144", 00:15:15.237 "ffdhe8192" 00:15:15.237 ] 00:15:15.237 } 00:15:15.237 }, 00:15:15.237 { 00:15:15.237 "method": "bdev_nvme_attach_controller", 00:15:15.237 "params": { 00:15:15.237 "name": "TLSTEST", 00:15:15.237 "trtype": "TCP", 00:15:15.237 "adrfam": "IPv4", 00:15:15.237 "traddr": "10.0.0.3", 00:15:15.237 "trsvcid": "4420", 00:15:15.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.237 "prchk_reftag": false, 00:15:15.237 "prchk_guard": false, 00:15:15.237 "ctrlr_loss_timeout_sec": 0, 00:15:15.237 "reconnect_delay_sec": 0, 00:15:15.237 "fast_io_fail_timeout_sec": 0, 00:15:15.237 "psk": "key0", 00:15:15.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:15.237 "hdgst": false, 00:15:15.237 "ddgst": false, 00:15:15.237 "multipath": "multipath" 00:15:15.237 } 00:15:15.237 }, 00:15:15.237 { 00:15:15.237 "method": "bdev_nvme_set_hotplug", 00:15:15.237 "params": { 00:15:15.237 "period_us": 100000, 00:15:15.237 "enable": false 
00:15:15.237 } 00:15:15.237 }, 00:15:15.237 { 00:15:15.237 "method": "bdev_wait_for_examine" 00:15:15.237 } 00:15:15.237 ] 00:15:15.237 }, 00:15:15.237 { 00:15:15.237 "subsystem": "nbd", 00:15:15.237 "config": [] 00:15:15.237 } 00:15:15.237 ] 00:15:15.237 }' 00:15:15.237 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 85796 00:15:15.237 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85796 ']' 00:15:15.237 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85796 00:15:15.237 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:15.237 16:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.237 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85796 00:15:15.497 killing process with pid 85796 00:15:15.497 Received shutdown signal, test time was about 10.000000 seconds 00:15:15.497 00:15:15.497 Latency(us) 00:15:15.497 [2024-11-29T16:50:39.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.497 [2024-11-29T16:50:39.289Z] =================================================================================================================== 00:15:15.497 [2024-11-29T16:50:39.289Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85796' 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85796 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85796 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 85748 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85748 ']' 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85748 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85748 00:15:15.497 killing process with pid 85748 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85748' 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85748 00:15:15.497 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85748 00:15:15.757 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:15.757 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
timing_enter start_nvmf_tgt 00:15:15.757 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:15.757 "subsystems": [ 00:15:15.757 { 00:15:15.757 "subsystem": "keyring", 00:15:15.757 "config": [ 00:15:15.757 { 00:15:15.757 "method": "keyring_file_add_key", 00:15:15.757 "params": { 00:15:15.757 "name": "key0", 00:15:15.757 "path": "/tmp/tmp.br8RQ8iLS0" 00:15:15.757 } 00:15:15.757 } 00:15:15.757 ] 00:15:15.757 }, 00:15:15.757 { 00:15:15.757 "subsystem": "iobuf", 00:15:15.757 "config": [ 00:15:15.757 { 00:15:15.757 "method": "iobuf_set_options", 00:15:15.757 "params": { 00:15:15.757 "small_pool_count": 8192, 00:15:15.757 "large_pool_count": 1024, 00:15:15.757 "small_bufsize": 8192, 00:15:15.757 "large_bufsize": 135168, 00:15:15.757 "enable_numa": false 00:15:15.757 } 00:15:15.757 } 00:15:15.757 ] 00:15:15.757 }, 00:15:15.757 { 00:15:15.757 "subsystem": "sock", 00:15:15.757 "config": [ 00:15:15.757 { 00:15:15.757 "method": "sock_set_default_impl", 00:15:15.757 "params": { 00:15:15.757 "impl_name": "uring" 00:15:15.757 } 00:15:15.757 }, 00:15:15.757 { 00:15:15.757 "method": "sock_impl_set_options", 00:15:15.757 "params": { 00:15:15.757 "impl_name": "ssl", 00:15:15.757 "recv_buf_size": 4096, 00:15:15.757 "send_buf_size": 4096, 00:15:15.757 "enable_recv_pipe": true, 00:15:15.757 "enable_quickack": false, 00:15:15.757 "enable_placement_id": 0, 00:15:15.757 "enable_zerocopy_send_server": true, 00:15:15.757 "enable_zerocopy_send_client": false, 00:15:15.757 "zerocopy_threshold": 0, 00:15:15.757 "tls_version": 0, 00:15:15.757 "enable_ktls": false 00:15:15.757 } 00:15:15.757 }, 00:15:15.757 { 00:15:15.757 "method": "sock_impl_set_options", 00:15:15.757 "params": { 00:15:15.757 "impl_name": "posix", 00:15:15.757 "recv_buf_size": 2097152, 00:15:15.757 "send_buf_size": 2097152, 00:15:15.757 "enable_recv_pipe": true, 00:15:15.757 "enable_quickack": false, 00:15:15.757 "enable_placement_id": 0, 00:15:15.757 "enable_zerocopy_send_server": true, 00:15:15.757 "enable_zerocopy_send_client": false, 00:15:15.757 "zerocopy_threshold": 0, 00:15:15.757 "tls_version": 0, 00:15:15.757 "enable_ktls": false 00:15:15.757 } 00:15:15.757 }, 00:15:15.757 { 00:15:15.757 "method": "sock_impl_set_options", 00:15:15.757 "params": { 00:15:15.757 "impl_name": "uring", 00:15:15.757 "recv_buf_size": 2097152, 00:15:15.757 "send_buf_size": 2097152, 00:15:15.757 "enable_recv_pipe": true, 00:15:15.757 "enable_quickack": false, 00:15:15.757 "enable_placement_id": 0, 00:15:15.757 "enable_zerocopy_send_server": false, 00:15:15.757 "enable_zerocopy_send_client": false, 00:15:15.757 "zerocopy_threshold": 0, 00:15:15.757 "tls_version": 0, 00:15:15.757 "enable_ktls": false 00:15:15.757 } 00:15:15.757 } 00:15:15.757 ] 00:15:15.757 }, 00:15:15.757 { 00:15:15.757 "subsystem": "vmd", 00:15:15.757 "config": [] 00:15:15.757 }, 00:15:15.757 { 00:15:15.757 "subsystem": "accel", 00:15:15.757 "config": [ 00:15:15.757 { 00:15:15.757 "method": "accel_set_options", 00:15:15.757 "params": { 00:15:15.757 "small_cache_size": 128, 00:15:15.757 "large_cache_size": 16, 00:15:15.757 "task_count": 2048, 00:15:15.757 "sequence_count": 2048, 00:15:15.757 "buf_count": 2048 00:15:15.757 } 00:15:15.757 } 00:15:15.757 ] 00:15:15.757 }, 00:15:15.757 { 00:15:15.757 "subsystem": "bdev", 00:15:15.757 "config": [ 00:15:15.757 { 00:15:15.757 "method": "bdev_set_options", 00:15:15.757 "params": { 00:15:15.757 "bdev_io_pool_size": 65535, 00:15:15.757 "bdev_io_cache_size": 256, 00:15:15.757 "bdev_auto_examine": true, 00:15:15.757 
"iobuf_small_cache_size": 128, 00:15:15.757 "iobuf_large_cache_size": 16 00:15:15.757 } 00:15:15.757 }, 00:15:15.757 { 00:15:15.757 "method": "bdev_raid_set_options", 00:15:15.757 "params": { 00:15:15.757 "process_window_size_kb": 1024, 00:15:15.757 "process_max_bandwidth_mb_sec": 0 00:15:15.757 } 00:15:15.757 }, 00:15:15.757 { 00:15:15.757 "method": "bdev_iscsi_set_options", 00:15:15.757 "params": { 00:15:15.757 "timeout_sec": 30 00:15:15.757 } 00:15:15.757 }, 00:15:15.757 { 00:15:15.757 "method": "bdev_nvme_set_options", 00:15:15.757 "params": { 00:15:15.757 "action_on_timeout": "none", 00:15:15.757 "timeout_us": 0, 00:15:15.757 "timeout_admin_us": 0, 00:15:15.757 "keep_alive_timeout_ms": 10000, 00:15:15.757 "arbitration_burst": 0, 00:15:15.757 "low_priority_weight": 0, 00:15:15.757 "medium_priority_weight": 0, 00:15:15.757 "high_priority_weight": 0, 00:15:15.757 "nvme_adminq_poll_period_us": 10000, 00:15:15.757 "nvme_ioq_poll_period_us": 0, 00:15:15.757 "io_queue_requests": 0, 00:15:15.757 "delay_cmd_submit": true, 00:15:15.757 "transport_retry_count": 4, 00:15:15.757 "bdev_retry_count": 3, 00:15:15.757 "transport_ack_timeout": 0, 00:15:15.757 "ctrlr_loss_timeout_sec": 0, 00:15:15.757 "reconnect_delay_sec": 0, 00:15:15.757 "fast_io_fail_timeout_sec": 0, 00:15:15.757 "disable_auto_failback": false, 00:15:15.757 "generate_uuids": false, 00:15:15.757 "transport_tos": 0, 00:15:15.757 "nvme_error_stat": false, 00:15:15.757 "rdma_srq_size": 0, 00:15:15.757 "io_path_stat": false, 00:15:15.757 "allow_accel_sequence": false, 00:15:15.757 "rdma_max_cq_size": 0, 00:15:15.757 "rdma_cm_event_timeout_ms": 0, 00:15:15.757 "dhchap_digests": [ 00:15:15.757 "sha256", 00:15:15.757 "sha384", 00:15:15.757 "sha512" 00:15:15.757 ], 00:15:15.757 "dhchap_dhgroups": [ 00:15:15.757 "null", 00:15:15.757 "ffdhe2048", 00:15:15.757 "ffdhe3072", 00:15:15.757 "ffdhe4096", 00:15:15.757 "ffdhe6144", 00:15:15.757 "ffdhe8192" 00:15:15.757 ] 00:15:15.757 } 00:15:15.757 }, 00:15:15.757 { 00:15:15.757 "method": "bdev_nvme_set_hotplug", 00:15:15.757 "params": { 00:15:15.757 "period_us": 100000, 00:15:15.757 "enable": false 00:15:15.757 } 00:15:15.757 }, 00:15:15.757 { 00:15:15.757 "method": "bdev_malloc_create", 00:15:15.757 "params": { 00:15:15.757 "name": "malloc0", 00:15:15.758 "num_blocks": 8192, 00:15:15.758 "block_size": 4096, 00:15:15.758 "physical_block_size": 4096, 00:15:15.758 "uuid": "48bc47f8-4229-40d9-8b13-6c379c95a470", 00:15:15.758 "optimal_io_boundary": 0, 00:15:15.758 "md_size": 0, 00:15:15.758 "dif_type": 0, 00:15:15.758 "dif_is_head_of_md": false, 00:15:15.758 "dif_pi_format": 0 00:15:15.758 } 00:15:15.758 }, 00:15:15.758 { 00:15:15.758 "method": "bdev_wait_for_examine" 00:15:15.758 } 00:15:15.758 ] 00:15:15.758 }, 00:15:15.758 { 00:15:15.758 "subsystem": "nbd", 00:15:15.758 "config": [] 00:15:15.758 }, 00:15:15.758 { 00:15:15.758 "subsystem": "scheduler", 00:15:15.758 "config": [ 00:15:15.758 { 00:15:15.758 "method": "framework_set_scheduler", 00:15:15.758 "params": { 00:15:15.758 "name": "static" 00:15:15.758 } 00:15:15.758 } 00:15:15.758 ] 00:15:15.758 }, 00:15:15.758 { 00:15:15.758 "subsystem": "nvmf", 00:15:15.758 "config": [ 00:15:15.758 { 00:15:15.758 "method": "nvmf_set_config", 00:15:15.758 "params": { 00:15:15.758 "discovery_filter": "match_any", 00:15:15.758 "admin_cmd_passthru": { 00:15:15.758 "identify_ctrlr": false 00:15:15.758 }, 00:15:15.758 "dhchap_digests": [ 00:15:15.758 "sha256", 00:15:15.758 "sha384", 00:15:15.758 "sha512" 00:15:15.758 ], 00:15:15.758 "dhchap_dhgroups": [ 00:15:15.758 
"null", 00:15:15.758 "ffdhe2048", 00:15:15.758 "ffdhe3072", 00:15:15.758 "ffdhe4096", 00:15:15.758 "ffdhe6144", 00:15:15.758 "ffdhe8192" 00:15:15.758 ] 00:15:15.758 } 00:15:15.758 }, 00:15:15.758 { 00:15:15.758 "method": "nvmf_set_max_subsystems", 00:15:15.758 "params": { 00:15:15.758 "max_subsystems": 1024 00:15:15.758 } 00:15:15.758 }, 00:15:15.758 { 00:15:15.758 "method": "nvmf_set_crdt", 00:15:15.758 "params": { 00:15:15.758 "crdt1": 0, 00:15:15.758 "crdt2": 0, 00:15:15.758 "crdt3": 0 00:15:15.758 } 00:15:15.758 }, 00:15:15.758 { 00:15:15.758 "method": "nvmf_create_transport", 00:15:15.758 "params": { 00:15:15.758 "trtype": "TCP", 00:15:15.758 "max_queue_depth": 128, 00:15:15.758 "max_io_qpairs_per_ctrlr": 127, 00:15:15.758 "in_capsule_data_size": 4096, 00:15:15.758 "max_io_size": 131072, 00:15:15.758 "io_unit_size": 131072, 00:15:15.758 "max_aq_depth": 128, 00:15:15.758 "num_shared_buffers": 511, 00:15:15.758 "buf_cache_size": 4294967295, 00:15:15.758 "dif_insert_or_strip": false, 00:15:15.758 "zcopy": false, 00:15:15.758 "c2h_success": false, 00:15:15.758 "sock_priority": 0, 00:15:15.758 "abort_timeout_sec": 1, 00:15:15.758 "ack_timeout": 0, 00:15:15.758 "data_wr_pool_size": 0 00:15:15.758 } 00:15:15.758 }, 00:15:15.758 { 00:15:15.758 "method": "nvmf_create_subsystem", 00:15:15.758 "params": { 00:15:15.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.758 "allow_any_host": false, 00:15:15.758 "serial_number": "SPDK00000000000001", 00:15:15.758 "model_number": "SPDK bdev Controller", 00:15:15.758 "max_namespaces": 10, 00:15:15.758 "min_cntlid": 1, 00:15:15.758 "max_cntlid": 65519, 00:15:15.758 "ana_reporting": false 00:15:15.758 } 00:15:15.758 }, 00:15:15.758 { 00:15:15.758 "method": "nvmf_subsystem_add_host", 00:15:15.758 "params": { 00:15:15.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.758 "host": "nqn.2016-06.io.spdk:host1", 00:15:15.758 "psk": "key0" 00:15:15.758 } 00:15:15.758 }, 00:15:15.758 { 00:15:15.758 "method": "nvmf_subsystem_add_ns", 00:15:15.758 "params": { 00:15:15.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.758 "namespace": { 00:15:15.758 "nsid": 1, 00:15:15.758 "bdev_name": "malloc0", 00:15:15.758 "nguid": "48BC47F8422940D98B136C379C95A470", 00:15:15.758 "uuid": "48bc47f8-4229-40d9-8b13-6c379c95a470", 00:15:15.758 "no_auto_visible": false 00:15:15.758 } 00:15:15.758 } 00:15:15.758 }, 00:15:15.758 { 00:15:15.758 "method": "nvmf_subsystem_add_listener", 00:15:15.758 "params": { 00:15:15.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.758 "listen_address": { 00:15:15.758 "trtype": "TCP", 00:15:15.758 "adrfam": "IPv4", 00:15:15.758 "traddr": "10.0.0.3", 00:15:15.758 "trsvcid": "4420" 00:15:15.758 }, 00:15:15.758 "secure_channel": true 00:15:15.758 } 00:15:15.758 } 00:15:15.758 ] 00:15:15.758 } 00:15:15.758 ] 00:15:15.758 }' 00:15:15.758 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:15.758 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.758 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85838 00:15:15.758 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:15.758 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85838 00:15:15.758 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85838 ']' 00:15:15.758 
16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.758 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.758 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.758 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.758 16:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.758 [2024-11-29 16:50:39.362268] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:15.758 [2024-11-29 16:50:39.362376] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.758 [2024-11-29 16:50:39.483312] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:15.758 [2024-11-29 16:50:39.506574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.758 [2024-11-29 16:50:39.526894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.758 [2024-11-29 16:50:39.526948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.758 [2024-11-29 16:50:39.526975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.758 [2024-11-29 16:50:39.526999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.758 [2024-11-29 16:50:39.527005] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.758 [2024-11-29 16:50:39.527352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.017 [2024-11-29 16:50:39.668755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:16.017 [2024-11-29 16:50:39.723327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.017 [2024-11-29 16:50:39.755290] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:16.017 [2024-11-29 16:50:39.755571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:16.585 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.585 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:16.585 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:16.585 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:16.585 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
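Both applications in this phase are launched from pre-generated JSON configurations rather than configured over RPC: the target above received the echoed config on /dev/fd/62, and bdevperf below receives its own copy on /dev/fd/63. The TLS-relevant pieces are the keyring_file_add_key entry pointing at the PSK file /tmp/tmp.br8RQ8iLS0, the listener with "secure_channel": true on the target side, and the bdev_nvme_attach_controller call with "psk": "key0" on the bdevperf side. A minimal sketch of the pattern, assuming the configs are held in shell variables and passed via bash process substitution (the variable names and the <(...) form are illustrative, not the harness's exact wrappers):

  # truncated to the keyring entry; the full config is the JSON echoed above
  nvmf_config='{ "subsystems": [ { "subsystem": "keyring", "config": [ { "method": "keyring_file_add_key", "params": { "name": "key0", "path": "/tmp/tmp.br8RQ8iLS0" } } ] } ] }'
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$nvmf_config") &
  # bdevperf_config would hold the bdevperf-side JSON shown below
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperf_config") &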
00:15:16.845 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.845 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=85870 00:15:16.845 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 85870 /var/tmp/bdevperf.sock 00:15:16.845 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85870 ']' 00:15:16.845 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.845 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.845 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:16.845 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.845 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.845 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.845 16:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:16.845 "subsystems": [ 00:15:16.845 { 00:15:16.845 "subsystem": "keyring", 00:15:16.845 "config": [ 00:15:16.845 { 00:15:16.845 "method": "keyring_file_add_key", 00:15:16.845 "params": { 00:15:16.845 "name": "key0", 00:15:16.845 "path": "/tmp/tmp.br8RQ8iLS0" 00:15:16.845 } 00:15:16.845 } 00:15:16.845 ] 00:15:16.845 }, 00:15:16.845 { 00:15:16.845 "subsystem": "iobuf", 00:15:16.845 "config": [ 00:15:16.845 { 00:15:16.845 "method": "iobuf_set_options", 00:15:16.845 "params": { 00:15:16.845 "small_pool_count": 8192, 00:15:16.845 "large_pool_count": 1024, 00:15:16.845 "small_bufsize": 8192, 00:15:16.845 "large_bufsize": 135168, 00:15:16.845 "enable_numa": false 00:15:16.845 } 00:15:16.845 } 00:15:16.845 ] 00:15:16.845 }, 00:15:16.845 { 00:15:16.845 "subsystem": "sock", 00:15:16.845 "config": [ 00:15:16.845 { 00:15:16.845 "method": "sock_set_default_impl", 00:15:16.845 "params": { 00:15:16.845 "impl_name": "uring" 00:15:16.845 } 00:15:16.845 }, 00:15:16.845 { 00:15:16.845 "method": "sock_impl_set_options", 00:15:16.845 "params": { 00:15:16.845 "impl_name": "ssl", 00:15:16.845 "recv_buf_size": 4096, 00:15:16.845 "send_buf_size": 4096, 00:15:16.845 "enable_recv_pipe": true, 00:15:16.845 "enable_quickack": false, 00:15:16.845 "enable_placement_id": 0, 00:15:16.845 "enable_zerocopy_send_server": true, 00:15:16.845 "enable_zerocopy_send_client": false, 00:15:16.845 "zerocopy_threshold": 0, 00:15:16.845 "tls_version": 0, 00:15:16.845 "enable_ktls": false 00:15:16.845 } 00:15:16.845 }, 00:15:16.845 { 00:15:16.845 "method": "sock_impl_set_options", 00:15:16.845 "params": { 00:15:16.845 "impl_name": "posix", 00:15:16.845 "recv_buf_size": 2097152, 00:15:16.845 "send_buf_size": 2097152, 00:15:16.845 "enable_recv_pipe": true, 00:15:16.845 "enable_quickack": false, 00:15:16.845 "enable_placement_id": 0, 00:15:16.845 "enable_zerocopy_send_server": true, 00:15:16.845 "enable_zerocopy_send_client": false, 00:15:16.845 "zerocopy_threshold": 0, 00:15:16.845 "tls_version": 0, 00:15:16.845 "enable_ktls": false 00:15:16.845 } 00:15:16.845 }, 00:15:16.845 { 
00:15:16.845 "method": "sock_impl_set_options", 00:15:16.845 "params": { 00:15:16.845 "impl_name": "uring", 00:15:16.845 "recv_buf_size": 2097152, 00:15:16.845 "send_buf_size": 2097152, 00:15:16.845 "enable_recv_pipe": true, 00:15:16.845 "enable_quickack": false, 00:15:16.845 "enable_placement_id": 0, 00:15:16.845 "enable_zerocopy_send_server": false, 00:15:16.845 "enable_zerocopy_send_client": false, 00:15:16.845 "zerocopy_threshold": 0, 00:15:16.845 "tls_version": 0, 00:15:16.845 "enable_ktls": false 00:15:16.845 } 00:15:16.845 } 00:15:16.845 ] 00:15:16.845 }, 00:15:16.845 { 00:15:16.845 "subsystem": "vmd", 00:15:16.845 "config": [] 00:15:16.845 }, 00:15:16.845 { 00:15:16.845 "subsystem": "accel", 00:15:16.845 "config": [ 00:15:16.845 { 00:15:16.845 "method": "accel_set_options", 00:15:16.845 "params": { 00:15:16.845 "small_cache_size": 128, 00:15:16.845 "large_cache_size": 16, 00:15:16.845 "task_count": 2048, 00:15:16.845 "sequence_count": 2048, 00:15:16.845 "buf_count": 2048 00:15:16.845 } 00:15:16.845 } 00:15:16.845 ] 00:15:16.845 }, 00:15:16.845 { 00:15:16.845 "subsystem": "bdev", 00:15:16.845 "config": [ 00:15:16.845 { 00:15:16.845 "method": "bdev_set_options", 00:15:16.845 "params": { 00:15:16.845 "bdev_io_pool_size": 65535, 00:15:16.845 "bdev_io_cache_size": 256, 00:15:16.845 "bdev_auto_examine": true, 00:15:16.845 "iobuf_small_cache_size": 128, 00:15:16.845 "iobuf_large_cache_size": 16 00:15:16.845 } 00:15:16.845 }, 00:15:16.845 { 00:15:16.845 "method": "bdev_raid_set_options", 00:15:16.845 "params": { 00:15:16.845 "process_window_size_kb": 1024, 00:15:16.845 "process_max_bandwidth_mb_sec": 0 00:15:16.845 } 00:15:16.845 }, 00:15:16.845 { 00:15:16.845 "method": "bdev_iscsi_set_options", 00:15:16.845 "params": { 00:15:16.845 "timeout_sec": 30 00:15:16.845 } 00:15:16.845 }, 00:15:16.845 { 00:15:16.845 "method": "bdev_nvme_set_options", 00:15:16.845 "params": { 00:15:16.845 "action_on_timeout": "none", 00:15:16.845 "timeout_us": 0, 00:15:16.845 "timeout_admin_us": 0, 00:15:16.845 "keep_alive_timeout_ms": 10000, 00:15:16.845 "arbitration_burst": 0, 00:15:16.845 "low_priority_weight": 0, 00:15:16.845 "medium_priority_weight": 0, 00:15:16.845 "high_priority_weight": 0, 00:15:16.845 "nvme_adminq_poll_period_us": 10000, 00:15:16.845 "nvme_ioq_poll_period_us": 0, 00:15:16.845 "io_queue_requests": 512, 00:15:16.845 "delay_cmd_submit": true, 00:15:16.845 "transport_retry_count": 4, 00:15:16.845 "bdev_retry_count": 3, 00:15:16.845 "transport_ack_timeout": 0, 00:15:16.845 "ctrlr_loss_timeout_sec": 0, 00:15:16.845 "reconnect_delay_sec": 0, 00:15:16.845 "fast_io_fail_timeout_sec": 0, 00:15:16.845 "disable_auto_failback": false, 00:15:16.845 "generate_uuids": false, 00:15:16.845 "transport_tos": 0, 00:15:16.845 "nvme_error_stat": false, 00:15:16.845 "rdma_srq_size": 0, 00:15:16.845 "io_path_stat": false, 00:15:16.845 "allow_accel_sequence": false, 00:15:16.845 "rdma_max_cq_size": 0, 00:15:16.845 "rdma_cm_event_timeout_ms": 0, 00:15:16.845 "dhchap_digests": [ 00:15:16.845 "sha256", 00:15:16.845 "sha384", 00:15:16.845 "sha512" 00:15:16.845 ], 00:15:16.845 "dhchap_dhgroups": [ 00:15:16.845 "null", 00:15:16.845 "ffdhe2048", 00:15:16.845 "ffdhe3072", 00:15:16.845 "ffdhe4096", 00:15:16.845 "ffdhe6144", 00:15:16.845 "ffdhe8192" 00:15:16.845 ] 00:15:16.845 } 00:15:16.845 }, 00:15:16.845 { 00:15:16.846 "method": "bdev_nvme_attach_controller", 00:15:16.846 "params": { 00:15:16.846 "name": "TLSTEST", 00:15:16.846 "trtype": "TCP", 00:15:16.846 "adrfam": "IPv4", 00:15:16.846 "traddr": "10.0.0.3", 00:15:16.846 
"trsvcid": "4420", 00:15:16.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.846 "prchk_reftag": false, 00:15:16.846 "prchk_guard": false, 00:15:16.846 "ctrlr_loss_timeout_sec": 0, 00:15:16.846 "reconnect_delay_sec": 0, 00:15:16.846 "fast_io_fail_timeout_sec": 0, 00:15:16.846 "psk": "key0", 00:15:16.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:16.846 "hdgst": false, 00:15:16.846 "ddgst": false, 00:15:16.846 "multipath": "multipath" 00:15:16.846 } 00:15:16.846 }, 00:15:16.846 { 00:15:16.846 "method": "bdev_nvme_set_hotplug", 00:15:16.846 "params": { 00:15:16.846 "period_us": 100000, 00:15:16.846 "enable": false 00:15:16.846 } 00:15:16.846 }, 00:15:16.846 { 00:15:16.846 "method": "bdev_wait_for_examine" 00:15:16.846 } 00:15:16.846 ] 00:15:16.846 }, 00:15:16.846 { 00:15:16.846 "subsystem": "nbd", 00:15:16.846 "config": [] 00:15:16.846 } 00:15:16.846 ] 00:15:16.846 }' 00:15:16.846 [2024-11-29 16:50:40.439064] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:16.846 [2024-11-29 16:50:40.439381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85870 ] 00:15:16.846 [2024-11-29 16:50:40.568837] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:16.846 [2024-11-29 16:50:40.588681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.846 [2024-11-29 16:50:40.608544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.105 [2024-11-29 16:50:40.717909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:17.105 [2024-11-29 16:50:40.747300] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:17.670 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.670 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:17.670 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:17.929 Running I/O for 10 seconds... 
00:15:19.800 4303.00 IOPS, 16.81 MiB/s [2024-11-29T16:50:44.968Z] 4328.00 IOPS, 16.91 MiB/s [2024-11-29T16:50:45.922Z] 4326.33 IOPS, 16.90 MiB/s [2024-11-29T16:50:46.883Z] 4332.25 IOPS, 16.92 MiB/s [2024-11-29T16:50:47.818Z] 4312.80 IOPS, 16.85 MiB/s [2024-11-29T16:50:48.753Z] 4276.33 IOPS, 16.70 MiB/s [2024-11-29T16:50:49.688Z] 4208.29 IOPS, 16.44 MiB/s [2024-11-29T16:50:50.623Z] 4171.75 IOPS, 16.30 MiB/s [2024-11-29T16:50:52.000Z] 4149.89 IOPS, 16.21 MiB/s [2024-11-29T16:50:52.000Z] 4137.40 IOPS, 16.16 MiB/s 00:15:28.208 Latency(us) 00:15:28.208 [2024-11-29T16:50:52.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.208 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:28.208 Verification LBA range: start 0x0 length 0x2000 00:15:28.208 TLSTESTn1 : 10.02 4143.25 16.18 0.00 0.00 30836.45 5957.82 28478.37 00:15:28.208 [2024-11-29T16:50:52.000Z] =================================================================================================================== 00:15:28.208 [2024-11-29T16:50:52.000Z] Total : 4143.25 16.18 0.00 0.00 30836.45 5957.82 28478.37 00:15:28.208 { 00:15:28.208 "results": [ 00:15:28.208 { 00:15:28.208 "job": "TLSTESTn1", 00:15:28.208 "core_mask": "0x4", 00:15:28.208 "workload": "verify", 00:15:28.208 "status": "finished", 00:15:28.208 "verify_range": { 00:15:28.208 "start": 0, 00:15:28.208 "length": 8192 00:15:28.208 }, 00:15:28.208 "queue_depth": 128, 00:15:28.208 "io_size": 4096, 00:15:28.208 "runtime": 10.016524, 00:15:28.208 "iops": 4143.253687606599, 00:15:28.208 "mibps": 16.184584717213276, 00:15:28.208 "io_failed": 0, 00:15:28.208 "io_timeout": 0, 00:15:28.208 "avg_latency_us": 30836.451184571677, 00:15:28.208 "min_latency_us": 5957.818181818182, 00:15:28.208 "max_latency_us": 28478.37090909091 00:15:28.208 } 00:15:28.208 ], 00:15:28.208 "core_count": 1 00:15:28.208 } 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 85870 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85870 ']' 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85870 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85870 00:15:28.208 killing process with pid 85870 00:15:28.208 Received shutdown signal, test time was about 10.000000 seconds 00:15:28.208 00:15:28.208 Latency(us) 00:15:28.208 [2024-11-29T16:50:52.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.208 [2024-11-29T16:50:52.000Z] =================================================================================================================== 00:15:28.208 [2024-11-29T16:50:52.000Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 85870' 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85870 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85870 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 85838 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85838 ']' 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85838 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85838 00:15:28.208 killing process with pid 85838 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:28.208 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85838' 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85838 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85838 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86003 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86003 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86003 ']' 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.209 16:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.209 [2024-11-29 16:50:51.990280] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:15:28.209 [2024-11-29 16:50:51.991320] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.468 [2024-11-29 16:50:52.118006] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:28.468 [2024-11-29 16:50:52.147652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.468 [2024-11-29 16:50:52.170208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.468 [2024-11-29 16:50:52.170269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.468 [2024-11-29 16:50:52.170282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.468 [2024-11-29 16:50:52.170292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.468 [2024-11-29 16:50:52.170302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.468 [2024-11-29 16:50:52.170687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.468 [2024-11-29 16:50:52.203945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:28.468 16:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.468 16:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:28.468 16:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:28.468 16:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:28.468 16:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.726 16:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.726 16:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.br8RQ8iLS0 00:15:28.726 16:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.br8RQ8iLS0 00:15:28.726 16:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:28.985 [2024-11-29 16:50:52.587057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.985 16:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:29.244 16:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:29.512 [2024-11-29 16:50:53.223199] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:29.513 [2024-11-29 16:50:53.223448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:29.513 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:29.774 malloc0 
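In this leg the target (tls.sh@220 onward) is started with no configuration and built up one RPC at a time; the -k flag on nvmf_subsystem_add_listener is what marks the TCP listener as TLS-capable, and the PSK is bound to the initiator host in the steps that follow immediately below. A condensed sketch of the whole setup_nvmf_tgt sequence, using only commands that appear in this run (rpc is shorthand for the full scripts/rpc.py path):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                  # TCP transport
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
  $rpc bdev_malloc_create 32 4096 -b malloc0            # backing bdev for the namespace
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.br8RQ8iLS0    # register the PSK file in the keyring
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0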
00:15:29.774 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:30.339 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.br8RQ8iLS0 00:15:30.596 16:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:30.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:30.854 16:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=86051 00:15:30.854 16:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:30.854 16:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:30.854 16:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 86051 /var/tmp/bdevperf.sock 00:15:30.854 16:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86051 ']' 00:15:30.854 16:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:30.854 16:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.854 16:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:30.854 16:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.854 16:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.854 [2024-11-29 16:50:54.466605] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:30.854 [2024-11-29 16:50:54.466920] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86051 ] 00:15:30.854 [2024-11-29 16:50:54.593241] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:30.854 [2024-11-29 16:50:54.620870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.112 [2024-11-29 16:50:54.645025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.112 [2024-11-29 16:50:54.678142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:31.112 16:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.112 16:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:31.112 16:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.br8RQ8iLS0 00:15:31.370 16:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:31.627 [2024-11-29 16:50:55.332927] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:31.627 nvme0n1 00:15:31.884 16:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:31.884 Running I/O for 1 seconds... 00:15:32.820 3944.00 IOPS, 15.41 MiB/s 00:15:32.820 Latency(us) 00:15:32.820 [2024-11-29T16:50:56.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.820 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.820 Verification LBA range: start 0x0 length 0x2000 00:15:32.820 nvme0n1 : 1.03 3956.68 15.46 0.00 0.00 31938.98 10307.03 25022.84 00:15:32.820 [2024-11-29T16:50:56.612Z] =================================================================================================================== 00:15:32.820 [2024-11-29T16:50:56.612Z] Total : 3956.68 15.46 0.00 0.00 31938.98 10307.03 25022.84 00:15:32.820 { 00:15:32.820 "results": [ 00:15:32.820 { 00:15:32.820 "job": "nvme0n1", 00:15:32.820 "core_mask": "0x2", 00:15:32.820 "workload": "verify", 00:15:32.820 "status": "finished", 00:15:32.820 "verify_range": { 00:15:32.820 "start": 0, 00:15:32.820 "length": 8192 00:15:32.820 }, 00:15:32.820 "queue_depth": 128, 00:15:32.820 "io_size": 4096, 00:15:32.820 "runtime": 1.029145, 00:15:32.820 "iops": 3956.6824888621136, 00:15:32.820 "mibps": 15.455790972117631, 00:15:32.820 "io_failed": 0, 00:15:32.820 "io_timeout": 0, 00:15:32.820 "avg_latency_us": 31938.982475442044, 00:15:32.820 "min_latency_us": 10307.025454545455, 00:15:32.820 "max_latency_us": 25022.836363636365 00:15:32.820 } 00:15:32.820 ], 00:15:32.820 "core_count": 1 00:15:32.820 } 00:15:32.820 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 86051 00:15:32.820 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86051 ']' 00:15:32.820 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86051 00:15:32.820 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:32.820 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.820 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86051 00:15:33.079 killing 
process with pid 86051 00:15:33.079 Received shutdown signal, test time was about 1.000000 seconds 00:15:33.079 00:15:33.079 Latency(us) 00:15:33.079 [2024-11-29T16:50:56.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.079 [2024-11-29T16:50:56.871Z] =================================================================================================================== 00:15:33.079 [2024-11-29T16:50:56.871Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86051' 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86051 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86051 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 86003 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86003 ']' 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86003 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86003 00:15:33.079 killing process with pid 86003 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86003' 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86003 00:15:33.079 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86003 00:15:33.339 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:33.339 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:33.339 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.339 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.339 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86096 00:15:33.339 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:33.339 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86096 00:15:33.339 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86096 ']' 00:15:33.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
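The final leg (tls.sh@242 onward) repeats the pattern once more, but its point is the configuration round trip: a bare target (pid 86096) is configured live over /var/tmp/spdk.sock, bdevperf runs against it, and save_config (tls.sh@267, like the earlier @199 call against the bdevperf socket) dumps the resulting JSON, which should contain the same keyring, secure_channel listener and PSK host entries that were fed in. A sketch of that dump step, assuming illustrative output filenames; load_config is mentioned only as the usual way such a dump is replayed:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > tgt_tls_config.json                                   # live nvmf target config
  $rpc -s /var/tmp/bdevperf.sock save_config > bdevperf_tls_config.json    # bdevperf-side config
  # a saved dump can later be replayed with: $rpc load_config < tgt_tls_config.json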
00:15:33.339 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.339 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.339 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.339 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.339 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.339 [2024-11-29 16:50:56.976857] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:33.339 [2024-11-29 16:50:56.977065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.339 [2024-11-29 16:50:57.099539] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:33.598 [2024-11-29 16:50:57.130267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.598 [2024-11-29 16:50:57.152949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.598 [2024-11-29 16:50:57.153031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.598 [2024-11-29 16:50:57.153045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.598 [2024-11-29 16:50:57.153055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.598 [2024-11-29 16:50:57.153064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:33.598 [2024-11-29 16:50:57.153464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.598 [2024-11-29 16:50:57.186742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.598 [2024-11-29 16:50:57.278066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.598 malloc0 00:15:33.598 [2024-11-29 16:50:57.305400] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:33.598 [2024-11-29 16:50:57.305649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:33.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=86120 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 86120 /var/tmp/bdevperf.sock 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86120 ']' 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.598 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.857 [2024-11-29 16:50:57.393592] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:15:33.858 [2024-11-29 16:50:57.393856] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86120 ] 00:15:33.858 [2024-11-29 16:50:57.520305] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:33.858 [2024-11-29 16:50:57.550826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.858 [2024-11-29 16:50:57.572564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.858 [2024-11-29 16:50:57.602889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.858 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.858 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:33.858 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.br8RQ8iLS0 00:15:34.426 16:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:34.686 [2024-11-29 16:50:58.260385] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:34.686 nvme0n1 00:15:34.686 16:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:34.946 Running I/O for 1 seconds... 
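Unlike the earlier bdevperf instance, this one (pid 86120) gets its TLS pieces at runtime over its RPC socket rather than from a startup config: the PSK file is registered in the keyring first, then bdev_nvme_attach_controller is called with --psk key0, which triggers the "TLS support is considered experimental" notice and creates the nvme0n1 bdev used by the short verify run whose results follow. The commands below correspond one-to-one to the calls in this run, with the long attach line wrapped for readability and rpc standing for the full scripts/rpc.py path:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.br8RQ8iLS0
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests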
00:15:35.882 3992.00 IOPS, 15.59 MiB/s 00:15:35.882 Latency(us) 00:15:35.882 [2024-11-29T16:50:59.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.882 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:35.882 Verification LBA range: start 0x0 length 0x2000 00:15:35.882 nvme0n1 : 1.02 4041.86 15.79 0.00 0.00 31260.57 2561.86 24188.74 00:15:35.882 [2024-11-29T16:50:59.674Z] =================================================================================================================== 00:15:35.882 [2024-11-29T16:50:59.674Z] Total : 4041.86 15.79 0.00 0.00 31260.57 2561.86 24188.74 00:15:35.882 { 00:15:35.882 "results": [ 00:15:35.882 { 00:15:35.882 "job": "nvme0n1", 00:15:35.882 "core_mask": "0x2", 00:15:35.882 "workload": "verify", 00:15:35.882 "status": "finished", 00:15:35.882 "verify_range": { 00:15:35.882 "start": 0, 00:15:35.882 "length": 8192 00:15:35.882 }, 00:15:35.882 "queue_depth": 128, 00:15:35.882 "io_size": 4096, 00:15:35.882 "runtime": 1.019333, 00:15:35.882 "iops": 4041.858744885136, 00:15:35.882 "mibps": 15.788510722207562, 00:15:35.882 "io_failed": 0, 00:15:35.882 "io_timeout": 0, 00:15:35.882 "avg_latency_us": 31260.570774933803, 00:15:35.882 "min_latency_us": 2561.8618181818183, 00:15:35.882 "max_latency_us": 24188.741818181818 00:15:35.882 } 00:15:35.882 ], 00:15:35.882 "core_count": 1 00:15:35.882 } 00:15:35.882 16:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:35.882 16:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.882 16:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.882 16:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.882 16:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:35.882 "subsystems": [ 00:15:35.882 { 00:15:35.882 "subsystem": "keyring", 00:15:35.882 "config": [ 00:15:35.882 { 00:15:35.882 "method": "keyring_file_add_key", 00:15:35.882 "params": { 00:15:35.882 "name": "key0", 00:15:35.883 "path": "/tmp/tmp.br8RQ8iLS0" 00:15:35.883 } 00:15:35.883 } 00:15:35.883 ] 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "subsystem": "iobuf", 00:15:35.883 "config": [ 00:15:35.883 { 00:15:35.883 "method": "iobuf_set_options", 00:15:35.883 "params": { 00:15:35.883 "small_pool_count": 8192, 00:15:35.883 "large_pool_count": 1024, 00:15:35.883 "small_bufsize": 8192, 00:15:35.883 "large_bufsize": 135168, 00:15:35.883 "enable_numa": false 00:15:35.883 } 00:15:35.883 } 00:15:35.883 ] 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "subsystem": "sock", 00:15:35.883 "config": [ 00:15:35.883 { 00:15:35.883 "method": "sock_set_default_impl", 00:15:35.883 "params": { 00:15:35.883 "impl_name": "uring" 00:15:35.883 } 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "method": "sock_impl_set_options", 00:15:35.883 "params": { 00:15:35.883 "impl_name": "ssl", 00:15:35.883 "recv_buf_size": 4096, 00:15:35.883 "send_buf_size": 4096, 00:15:35.883 "enable_recv_pipe": true, 00:15:35.883 "enable_quickack": false, 00:15:35.883 "enable_placement_id": 0, 00:15:35.883 "enable_zerocopy_send_server": true, 00:15:35.883 "enable_zerocopy_send_client": false, 00:15:35.883 "zerocopy_threshold": 0, 00:15:35.883 "tls_version": 0, 00:15:35.883 "enable_ktls": false 00:15:35.883 } 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "method": "sock_impl_set_options", 00:15:35.883 "params": { 00:15:35.883 "impl_name": 
"posix", 00:15:35.883 "recv_buf_size": 2097152, 00:15:35.883 "send_buf_size": 2097152, 00:15:35.883 "enable_recv_pipe": true, 00:15:35.883 "enable_quickack": false, 00:15:35.883 "enable_placement_id": 0, 00:15:35.883 "enable_zerocopy_send_server": true, 00:15:35.883 "enable_zerocopy_send_client": false, 00:15:35.883 "zerocopy_threshold": 0, 00:15:35.883 "tls_version": 0, 00:15:35.883 "enable_ktls": false 00:15:35.883 } 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "method": "sock_impl_set_options", 00:15:35.883 "params": { 00:15:35.883 "impl_name": "uring", 00:15:35.883 "recv_buf_size": 2097152, 00:15:35.883 "send_buf_size": 2097152, 00:15:35.883 "enable_recv_pipe": true, 00:15:35.883 "enable_quickack": false, 00:15:35.883 "enable_placement_id": 0, 00:15:35.883 "enable_zerocopy_send_server": false, 00:15:35.883 "enable_zerocopy_send_client": false, 00:15:35.883 "zerocopy_threshold": 0, 00:15:35.883 "tls_version": 0, 00:15:35.883 "enable_ktls": false 00:15:35.883 } 00:15:35.883 } 00:15:35.883 ] 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "subsystem": "vmd", 00:15:35.883 "config": [] 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "subsystem": "accel", 00:15:35.883 "config": [ 00:15:35.883 { 00:15:35.883 "method": "accel_set_options", 00:15:35.883 "params": { 00:15:35.883 "small_cache_size": 128, 00:15:35.883 "large_cache_size": 16, 00:15:35.883 "task_count": 2048, 00:15:35.883 "sequence_count": 2048, 00:15:35.883 "buf_count": 2048 00:15:35.883 } 00:15:35.883 } 00:15:35.883 ] 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "subsystem": "bdev", 00:15:35.883 "config": [ 00:15:35.883 { 00:15:35.883 "method": "bdev_set_options", 00:15:35.883 "params": { 00:15:35.883 "bdev_io_pool_size": 65535, 00:15:35.883 "bdev_io_cache_size": 256, 00:15:35.883 "bdev_auto_examine": true, 00:15:35.883 "iobuf_small_cache_size": 128, 00:15:35.883 "iobuf_large_cache_size": 16 00:15:35.883 } 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "method": "bdev_raid_set_options", 00:15:35.883 "params": { 00:15:35.883 "process_window_size_kb": 1024, 00:15:35.883 "process_max_bandwidth_mb_sec": 0 00:15:35.883 } 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "method": "bdev_iscsi_set_options", 00:15:35.883 "params": { 00:15:35.883 "timeout_sec": 30 00:15:35.883 } 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "method": "bdev_nvme_set_options", 00:15:35.883 "params": { 00:15:35.883 "action_on_timeout": "none", 00:15:35.883 "timeout_us": 0, 00:15:35.883 "timeout_admin_us": 0, 00:15:35.883 "keep_alive_timeout_ms": 10000, 00:15:35.883 "arbitration_burst": 0, 00:15:35.883 "low_priority_weight": 0, 00:15:35.883 "medium_priority_weight": 0, 00:15:35.883 "high_priority_weight": 0, 00:15:35.883 "nvme_adminq_poll_period_us": 10000, 00:15:35.883 "nvme_ioq_poll_period_us": 0, 00:15:35.883 "io_queue_requests": 0, 00:15:35.883 "delay_cmd_submit": true, 00:15:35.883 "transport_retry_count": 4, 00:15:35.883 "bdev_retry_count": 3, 00:15:35.883 "transport_ack_timeout": 0, 00:15:35.883 "ctrlr_loss_timeout_sec": 0, 00:15:35.883 "reconnect_delay_sec": 0, 00:15:35.883 "fast_io_fail_timeout_sec": 0, 00:15:35.883 "disable_auto_failback": false, 00:15:35.883 "generate_uuids": false, 00:15:35.883 "transport_tos": 0, 00:15:35.883 "nvme_error_stat": false, 00:15:35.883 "rdma_srq_size": 0, 00:15:35.883 "io_path_stat": false, 00:15:35.883 "allow_accel_sequence": false, 00:15:35.883 "rdma_max_cq_size": 0, 00:15:35.883 "rdma_cm_event_timeout_ms": 0, 00:15:35.883 "dhchap_digests": [ 00:15:35.883 "sha256", 00:15:35.883 "sha384", 00:15:35.883 "sha512" 00:15:35.883 ], 00:15:35.883 
"dhchap_dhgroups": [ 00:15:35.883 "null", 00:15:35.883 "ffdhe2048", 00:15:35.883 "ffdhe3072", 00:15:35.883 "ffdhe4096", 00:15:35.883 "ffdhe6144", 00:15:35.883 "ffdhe8192" 00:15:35.883 ] 00:15:35.883 } 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "method": "bdev_nvme_set_hotplug", 00:15:35.883 "params": { 00:15:35.883 "period_us": 100000, 00:15:35.883 "enable": false 00:15:35.883 } 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "method": "bdev_malloc_create", 00:15:35.883 "params": { 00:15:35.883 "name": "malloc0", 00:15:35.883 "num_blocks": 8192, 00:15:35.883 "block_size": 4096, 00:15:35.883 "physical_block_size": 4096, 00:15:35.883 "uuid": "ac2154bc-e76b-4145-b7cb-554e05c19bc6", 00:15:35.883 "optimal_io_boundary": 0, 00:15:35.883 "md_size": 0, 00:15:35.883 "dif_type": 0, 00:15:35.883 "dif_is_head_of_md": false, 00:15:35.883 "dif_pi_format": 0 00:15:35.883 } 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "method": "bdev_wait_for_examine" 00:15:35.883 } 00:15:35.883 ] 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "subsystem": "nbd", 00:15:35.883 "config": [] 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "subsystem": "scheduler", 00:15:35.883 "config": [ 00:15:35.883 { 00:15:35.883 "method": "framework_set_scheduler", 00:15:35.883 "params": { 00:15:35.883 "name": "static" 00:15:35.883 } 00:15:35.883 } 00:15:35.883 ] 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "subsystem": "nvmf", 00:15:35.883 "config": [ 00:15:35.883 { 00:15:35.883 "method": "nvmf_set_config", 00:15:35.883 "params": { 00:15:35.883 "discovery_filter": "match_any", 00:15:35.883 "admin_cmd_passthru": { 00:15:35.883 "identify_ctrlr": false 00:15:35.883 }, 00:15:35.883 "dhchap_digests": [ 00:15:35.883 "sha256", 00:15:35.883 "sha384", 00:15:35.883 "sha512" 00:15:35.883 ], 00:15:35.883 "dhchap_dhgroups": [ 00:15:35.883 "null", 00:15:35.883 "ffdhe2048", 00:15:35.883 "ffdhe3072", 00:15:35.883 "ffdhe4096", 00:15:35.883 "ffdhe6144", 00:15:35.883 "ffdhe8192" 00:15:35.883 ] 00:15:35.883 } 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "method": "nvmf_set_max_subsystems", 00:15:35.883 "params": { 00:15:35.883 "max_subsystems": 1024 00:15:35.883 } 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "method": "nvmf_set_crdt", 00:15:35.883 "params": { 00:15:35.883 "crdt1": 0, 00:15:35.883 "crdt2": 0, 00:15:35.883 "crdt3": 0 00:15:35.883 } 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "method": "nvmf_create_transport", 00:15:35.883 "params": { 00:15:35.883 "trtype": "TCP", 00:15:35.883 "max_queue_depth": 128, 00:15:35.883 "max_io_qpairs_per_ctrlr": 127, 00:15:35.883 "in_capsule_data_size": 4096, 00:15:35.883 "max_io_size": 131072, 00:15:35.883 "io_unit_size": 131072, 00:15:35.883 "max_aq_depth": 128, 00:15:35.883 "num_shared_buffers": 511, 00:15:35.883 "buf_cache_size": 4294967295, 00:15:35.883 "dif_insert_or_strip": false, 00:15:35.883 "zcopy": false, 00:15:35.883 "c2h_success": false, 00:15:35.883 "sock_priority": 0, 00:15:35.883 "abort_timeout_sec": 1, 00:15:35.883 "ack_timeout": 0, 00:15:35.883 "data_wr_pool_size": 0 00:15:35.883 } 00:15:35.883 }, 00:15:35.883 { 00:15:35.883 "method": "nvmf_create_subsystem", 00:15:35.883 "params": { 00:15:35.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.884 "allow_any_host": false, 00:15:35.884 "serial_number": "00000000000000000000", 00:15:35.884 "model_number": "SPDK bdev Controller", 00:15:35.884 "max_namespaces": 32, 00:15:35.884 "min_cntlid": 1, 00:15:35.884 "max_cntlid": 65519, 00:15:35.884 "ana_reporting": false 00:15:35.884 } 00:15:35.884 }, 00:15:35.884 { 00:15:35.884 "method": "nvmf_subsystem_add_host", 
00:15:35.884 "params": { 00:15:35.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.884 "host": "nqn.2016-06.io.spdk:host1", 00:15:35.884 "psk": "key0" 00:15:35.884 } 00:15:35.884 }, 00:15:35.884 { 00:15:35.884 "method": "nvmf_subsystem_add_ns", 00:15:35.884 "params": { 00:15:35.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.884 "namespace": { 00:15:35.884 "nsid": 1, 00:15:35.884 "bdev_name": "malloc0", 00:15:35.884 "nguid": "AC2154BCE76B4145B7CB554E05C19BC6", 00:15:35.884 "uuid": "ac2154bc-e76b-4145-b7cb-554e05c19bc6", 00:15:35.884 "no_auto_visible": false 00:15:35.884 } 00:15:35.884 } 00:15:35.884 }, 00:15:35.884 { 00:15:35.884 "method": "nvmf_subsystem_add_listener", 00:15:35.884 "params": { 00:15:35.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.884 "listen_address": { 00:15:35.884 "trtype": "TCP", 00:15:35.884 "adrfam": "IPv4", 00:15:35.884 "traddr": "10.0.0.3", 00:15:35.884 "trsvcid": "4420" 00:15:35.884 }, 00:15:35.884 "secure_channel": false, 00:15:35.884 "sock_impl": "ssl" 00:15:35.884 } 00:15:35.884 } 00:15:35.884 ] 00:15:35.884 } 00:15:35.884 ] 00:15:35.884 }' 00:15:35.884 16:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:36.452 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:36.452 "subsystems": [ 00:15:36.452 { 00:15:36.452 "subsystem": "keyring", 00:15:36.452 "config": [ 00:15:36.452 { 00:15:36.452 "method": "keyring_file_add_key", 00:15:36.452 "params": { 00:15:36.452 "name": "key0", 00:15:36.452 "path": "/tmp/tmp.br8RQ8iLS0" 00:15:36.452 } 00:15:36.452 } 00:15:36.452 ] 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "subsystem": "iobuf", 00:15:36.452 "config": [ 00:15:36.452 { 00:15:36.452 "method": "iobuf_set_options", 00:15:36.452 "params": { 00:15:36.452 "small_pool_count": 8192, 00:15:36.452 "large_pool_count": 1024, 00:15:36.452 "small_bufsize": 8192, 00:15:36.452 "large_bufsize": 135168, 00:15:36.452 "enable_numa": false 00:15:36.452 } 00:15:36.452 } 00:15:36.452 ] 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "subsystem": "sock", 00:15:36.452 "config": [ 00:15:36.452 { 00:15:36.452 "method": "sock_set_default_impl", 00:15:36.452 "params": { 00:15:36.452 "impl_name": "uring" 00:15:36.452 } 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "method": "sock_impl_set_options", 00:15:36.452 "params": { 00:15:36.452 "impl_name": "ssl", 00:15:36.452 "recv_buf_size": 4096, 00:15:36.452 "send_buf_size": 4096, 00:15:36.452 "enable_recv_pipe": true, 00:15:36.452 "enable_quickack": false, 00:15:36.452 "enable_placement_id": 0, 00:15:36.452 "enable_zerocopy_send_server": true, 00:15:36.452 "enable_zerocopy_send_client": false, 00:15:36.452 "zerocopy_threshold": 0, 00:15:36.452 "tls_version": 0, 00:15:36.452 "enable_ktls": false 00:15:36.452 } 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "method": "sock_impl_set_options", 00:15:36.452 "params": { 00:15:36.452 "impl_name": "posix", 00:15:36.452 "recv_buf_size": 2097152, 00:15:36.452 "send_buf_size": 2097152, 00:15:36.452 "enable_recv_pipe": true, 00:15:36.452 "enable_quickack": false, 00:15:36.452 "enable_placement_id": 0, 00:15:36.452 "enable_zerocopy_send_server": true, 00:15:36.452 "enable_zerocopy_send_client": false, 00:15:36.452 "zerocopy_threshold": 0, 00:15:36.452 "tls_version": 0, 00:15:36.452 "enable_ktls": false 00:15:36.452 } 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "method": "sock_impl_set_options", 00:15:36.452 "params": { 00:15:36.452 "impl_name": "uring", 00:15:36.452 
"recv_buf_size": 2097152, 00:15:36.452 "send_buf_size": 2097152, 00:15:36.452 "enable_recv_pipe": true, 00:15:36.452 "enable_quickack": false, 00:15:36.452 "enable_placement_id": 0, 00:15:36.452 "enable_zerocopy_send_server": false, 00:15:36.452 "enable_zerocopy_send_client": false, 00:15:36.452 "zerocopy_threshold": 0, 00:15:36.452 "tls_version": 0, 00:15:36.452 "enable_ktls": false 00:15:36.452 } 00:15:36.452 } 00:15:36.452 ] 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "subsystem": "vmd", 00:15:36.452 "config": [] 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "subsystem": "accel", 00:15:36.452 "config": [ 00:15:36.452 { 00:15:36.452 "method": "accel_set_options", 00:15:36.452 "params": { 00:15:36.452 "small_cache_size": 128, 00:15:36.452 "large_cache_size": 16, 00:15:36.452 "task_count": 2048, 00:15:36.452 "sequence_count": 2048, 00:15:36.452 "buf_count": 2048 00:15:36.452 } 00:15:36.452 } 00:15:36.452 ] 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "subsystem": "bdev", 00:15:36.452 "config": [ 00:15:36.452 { 00:15:36.452 "method": "bdev_set_options", 00:15:36.452 "params": { 00:15:36.452 "bdev_io_pool_size": 65535, 00:15:36.452 "bdev_io_cache_size": 256, 00:15:36.452 "bdev_auto_examine": true, 00:15:36.452 "iobuf_small_cache_size": 128, 00:15:36.452 "iobuf_large_cache_size": 16 00:15:36.452 } 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "method": "bdev_raid_set_options", 00:15:36.452 "params": { 00:15:36.452 "process_window_size_kb": 1024, 00:15:36.452 "process_max_bandwidth_mb_sec": 0 00:15:36.452 } 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "method": "bdev_iscsi_set_options", 00:15:36.452 "params": { 00:15:36.452 "timeout_sec": 30 00:15:36.452 } 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "method": "bdev_nvme_set_options", 00:15:36.452 "params": { 00:15:36.452 "action_on_timeout": "none", 00:15:36.452 "timeout_us": 0, 00:15:36.452 "timeout_admin_us": 0, 00:15:36.452 "keep_alive_timeout_ms": 10000, 00:15:36.452 "arbitration_burst": 0, 00:15:36.452 "low_priority_weight": 0, 00:15:36.452 "medium_priority_weight": 0, 00:15:36.452 "high_priority_weight": 0, 00:15:36.452 "nvme_adminq_poll_period_us": 10000, 00:15:36.452 "nvme_ioq_poll_period_us": 0, 00:15:36.452 "io_queue_requests": 512, 00:15:36.452 "delay_cmd_submit": true, 00:15:36.452 "transport_retry_count": 4, 00:15:36.452 "bdev_retry_count": 3, 00:15:36.452 "transport_ack_timeout": 0, 00:15:36.452 "ctrlr_loss_timeout_sec": 0, 00:15:36.452 "reconnect_delay_sec": 0, 00:15:36.452 "fast_io_fail_timeout_sec": 0, 00:15:36.452 "disable_auto_failback": false, 00:15:36.452 "generate_uuids": false, 00:15:36.452 "transport_tos": 0, 00:15:36.452 "nvme_error_stat": false, 00:15:36.452 "rdma_srq_size": 0, 00:15:36.452 "io_path_stat": false, 00:15:36.452 "allow_accel_sequence": false, 00:15:36.452 "rdma_max_cq_size": 0, 00:15:36.452 "rdma_cm_event_timeout_ms": 0, 00:15:36.452 "dhchap_digests": [ 00:15:36.452 "sha256", 00:15:36.452 "sha384", 00:15:36.452 "sha512" 00:15:36.452 ], 00:15:36.452 "dhchap_dhgroups": [ 00:15:36.452 "null", 00:15:36.452 "ffdhe2048", 00:15:36.452 "ffdhe3072", 00:15:36.452 "ffdhe4096", 00:15:36.452 "ffdhe6144", 00:15:36.452 "ffdhe8192" 00:15:36.452 ] 00:15:36.452 } 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "method": "bdev_nvme_attach_controller", 00:15:36.452 "params": { 00:15:36.452 "name": "nvme0", 00:15:36.452 "trtype": "TCP", 00:15:36.452 "adrfam": "IPv4", 00:15:36.452 "traddr": "10.0.0.3", 00:15:36.452 "trsvcid": "4420", 00:15:36.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.452 "prchk_reftag": false, 00:15:36.452 
"prchk_guard": false, 00:15:36.452 "ctrlr_loss_timeout_sec": 0, 00:15:36.452 "reconnect_delay_sec": 0, 00:15:36.452 "fast_io_fail_timeout_sec": 0, 00:15:36.452 "psk": "key0", 00:15:36.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.452 "hdgst": false, 00:15:36.452 "ddgst": false, 00:15:36.452 "multipath": "multipath" 00:15:36.452 } 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "method": "bdev_nvme_set_hotplug", 00:15:36.452 "params": { 00:15:36.452 "period_us": 100000, 00:15:36.452 "enable": false 00:15:36.452 } 00:15:36.452 }, 00:15:36.453 { 00:15:36.453 "method": "bdev_enable_histogram", 00:15:36.453 "params": { 00:15:36.453 "name": "nvme0n1", 00:15:36.453 "enable": true 00:15:36.453 } 00:15:36.453 }, 00:15:36.453 { 00:15:36.453 "method": "bdev_wait_for_examine" 00:15:36.453 } 00:15:36.453 ] 00:15:36.453 }, 00:15:36.453 { 00:15:36.453 "subsystem": "nbd", 00:15:36.453 "config": [] 00:15:36.453 } 00:15:36.453 ] 00:15:36.453 }' 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 86120 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86120 ']' 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86120 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86120 00:15:36.453 killing process with pid 86120 00:15:36.453 Received shutdown signal, test time was about 1.000000 seconds 00:15:36.453 00:15:36.453 Latency(us) 00:15:36.453 [2024-11-29T16:51:00.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.453 [2024-11-29T16:51:00.245Z] =================================================================================================================== 00:15:36.453 [2024-11-29T16:51:00.245Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86120' 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86120 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86120 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 86096 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86096 ']' 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86096 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86096 00:15:36.453 killing process with pid 86096 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86096' 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86096 00:15:36.453 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86096 00:15:36.712 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:36.712 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:36.712 "subsystems": [ 00:15:36.712 { 00:15:36.712 "subsystem": "keyring", 00:15:36.712 "config": [ 00:15:36.712 { 00:15:36.712 "method": "keyring_file_add_key", 00:15:36.712 "params": { 00:15:36.712 "name": "key0", 00:15:36.712 "path": "/tmp/tmp.br8RQ8iLS0" 00:15:36.712 } 00:15:36.712 } 00:15:36.712 ] 00:15:36.712 }, 00:15:36.712 { 00:15:36.712 "subsystem": "iobuf", 00:15:36.712 "config": [ 00:15:36.712 { 00:15:36.712 "method": "iobuf_set_options", 00:15:36.712 "params": { 00:15:36.712 "small_pool_count": 8192, 00:15:36.712 "large_pool_count": 1024, 00:15:36.712 "small_bufsize": 8192, 00:15:36.712 "large_bufsize": 135168, 00:15:36.712 "enable_numa": false 00:15:36.712 } 00:15:36.712 } 00:15:36.712 ] 00:15:36.712 }, 00:15:36.712 { 00:15:36.712 "subsystem": "sock", 00:15:36.712 "config": [ 00:15:36.712 { 00:15:36.712 "method": "sock_set_default_impl", 00:15:36.712 "params": { 00:15:36.712 "impl_name": "uring" 00:15:36.712 } 00:15:36.712 }, 00:15:36.712 { 00:15:36.712 "method": "sock_impl_set_options", 00:15:36.712 "params": { 00:15:36.712 "impl_name": "ssl", 00:15:36.712 "recv_buf_size": 4096, 00:15:36.712 "send_buf_size": 4096, 00:15:36.712 "enable_recv_pipe": true, 00:15:36.712 "enable_quickack": false, 00:15:36.712 "enable_placement_id": 0, 00:15:36.712 "enable_zerocopy_send_server": true, 00:15:36.712 "enable_zerocopy_send_client": false, 00:15:36.712 "zerocopy_threshold": 0, 00:15:36.712 "tls_version": 0, 00:15:36.712 "enable_ktls": false 00:15:36.712 } 00:15:36.712 }, 00:15:36.712 { 00:15:36.712 "method": "sock_impl_set_options", 00:15:36.712 "params": { 00:15:36.713 "impl_name": "posix", 00:15:36.713 "recv_buf_size": 2097152, 00:15:36.713 "send_buf_size": 2097152, 00:15:36.713 "enable_recv_pipe": true, 00:15:36.713 "enable_quickack": false, 00:15:36.713 "enable_placement_id": 0, 00:15:36.713 "enable_zerocopy_send_server": true, 00:15:36.713 "enable_zerocopy_send_client": false, 00:15:36.713 "zerocopy_threshold": 0, 00:15:36.713 "tls_version": 0, 00:15:36.713 "enable_ktls": false 00:15:36.713 } 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "method": "sock_impl_set_options", 00:15:36.713 "params": { 00:15:36.713 "impl_name": "uring", 00:15:36.713 "recv_buf_size": 2097152, 00:15:36.713 "send_buf_size": 2097152, 00:15:36.713 "enable_recv_pipe": true, 00:15:36.713 "enable_quickack": false, 00:15:36.713 "enable_placement_id": 0, 00:15:36.713 "enable_zerocopy_send_server": false, 00:15:36.713 "enable_zerocopy_send_client": false, 00:15:36.713 "zerocopy_threshold": 0, 00:15:36.713 "tls_version": 0, 00:15:36.713 "enable_ktls": false 00:15:36.713 } 00:15:36.713 } 00:15:36.713 ] 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "subsystem": "vmd", 00:15:36.713 "config": [] 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "subsystem": "accel", 00:15:36.713 "config": [ 00:15:36.713 { 00:15:36.713 "method": "accel_set_options", 00:15:36.713 
"params": { 00:15:36.713 "small_cache_size": 128, 00:15:36.713 "large_cache_size": 16, 00:15:36.713 "task_count": 2048, 00:15:36.713 "sequence_count": 2048, 00:15:36.713 "buf_count": 2048 00:15:36.713 } 00:15:36.713 } 00:15:36.713 ] 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "subsystem": "bdev", 00:15:36.713 "config": [ 00:15:36.713 { 00:15:36.713 "method": "bdev_set_options", 00:15:36.713 "params": { 00:15:36.713 "bdev_io_pool_size": 65535, 00:15:36.713 "bdev_io_cache_size": 256, 00:15:36.713 "bdev_auto_examine": true, 00:15:36.713 "iobuf_small_cache_size": 128, 00:15:36.713 "iobuf_large_cache_size": 16 00:15:36.713 } 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "method": "bdev_raid_set_options", 00:15:36.713 "params": { 00:15:36.713 "process_window_size_kb": 1024, 00:15:36.713 "process_max_bandwidth_mb_sec": 0 00:15:36.713 } 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "method": "bdev_iscsi_set_options", 00:15:36.713 "params": { 00:15:36.713 "timeout_sec": 30 00:15:36.713 } 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "method": "bdev_nvme_set_options", 00:15:36.713 "params": { 00:15:36.713 "action_on_timeout": "none", 00:15:36.713 "timeout_us": 0, 00:15:36.713 "timeout_admin_us": 0, 00:15:36.713 "keep_alive_timeout_ms": 10000, 00:15:36.713 "arbitration_burst": 0, 00:15:36.713 "low_priority_weight": 0, 00:15:36.713 "medium_priority_weight": 0, 00:15:36.713 "high_priority_weight": 0, 00:15:36.713 "nvme_adminq_poll_period_us": 10000, 00:15:36.713 "nvme_ioq_poll_period_us": 0, 00:15:36.713 "io_queue_requests": 0, 00:15:36.713 "delay_cmd_submit": true, 00:15:36.713 "transport_retry_count": 4, 00:15:36.713 "bdev_retry_count": 3, 00:15:36.713 "transport_ack_timeout": 0, 00:15:36.713 "ctrlr_loss_timeout_sec": 0, 00:15:36.713 "reconnect_delay_sec": 0, 00:15:36.713 "fast_io_fail_timeout_sec": 0, 00:15:36.713 "disable_auto_failback": false, 00:15:36.713 "generate_uuids": false, 00:15:36.713 "transport_tos": 0, 00:15:36.713 "nvme_error_stat": false, 00:15:36.713 "rdma_srq_size": 0, 00:15:36.713 "io_path_stat": false, 00:15:36.713 "allow_accel_sequence": false, 00:15:36.713 "rdma_max_cq_size": 0, 00:15:36.713 "rdma_cm_event_timeout_ms": 0, 00:15:36.713 "dhchap_digests": [ 00:15:36.713 "sha256", 00:15:36.713 "sha384", 00:15:36.713 "sha512" 00:15:36.713 ], 00:15:36.713 "dhchap_dhgroups": [ 00:15:36.713 "null", 00:15:36.713 "ffdhe2048", 00:15:36.713 "ffdhe3072", 00:15:36.713 "ffdhe4096", 00:15:36.713 "ffdhe6144", 00:15:36.713 "ffdhe8192" 00:15:36.713 ] 00:15:36.713 } 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "method": "bdev_nvme_set_hotplug", 00:15:36.713 "params": { 00:15:36.713 "period_us": 100000, 00:15:36.713 "enable": false 00:15:36.713 } 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "method": "bdev_malloc_create", 00:15:36.713 "params": { 00:15:36.713 "name": "malloc0", 00:15:36.713 "num_blocks": 8192, 00:15:36.713 "block_size": 4096, 00:15:36.713 "physical_block_size": 4096, 00:15:36.713 "uuid": "ac2154bc-e76b-4145-b7cb-554e05c19bc6", 00:15:36.713 "optimal_io_boundary": 0, 00:15:36.713 "md_size": 0, 00:15:36.713 "dif_type": 0, 00:15:36.713 "dif_is_head_of_md": false, 00:15:36.713 "dif_pi_format": 0 00:15:36.713 } 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "method": "bdev_wait_for_examine" 00:15:36.713 } 00:15:36.713 ] 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "subsystem": "nbd", 00:15:36.713 "config": [] 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "subsystem": "scheduler", 00:15:36.713 "config": [ 00:15:36.713 { 00:15:36.713 "method": "framework_set_scheduler", 00:15:36.713 "params": { 
00:15:36.713 "name": "static" 00:15:36.713 } 00:15:36.713 } 00:15:36.713 ] 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "subsystem": "nvmf", 00:15:36.713 "config": [ 00:15:36.713 { 00:15:36.713 "method": "nvmf_set_config", 00:15:36.713 "params": { 00:15:36.713 "discovery_filter": "match_any", 00:15:36.713 "admin_cmd_passthru": { 00:15:36.713 "identify_ctrlr": false 00:15:36.713 }, 00:15:36.713 "dhchap_digests": [ 00:15:36.713 "sha256", 00:15:36.713 "sha384", 00:15:36.713 "sha512" 00:15:36.713 ], 00:15:36.713 "dhchap_dhgroups": [ 00:15:36.713 "null", 00:15:36.713 "ffdhe2048", 00:15:36.713 "ffdhe3072", 00:15:36.713 "ffdhe4096", 00:15:36.713 "ffdhe6144", 00:15:36.713 "ffdhe8192" 00:15:36.713 ] 00:15:36.713 } 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "method": "nvmf_set_max_subsystems", 00:15:36.713 "params": { 00:15:36.713 "max_subsystems": 1024 00:15:36.713 } 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "method": "nvmf_set_crdt", 00:15:36.713 "params": { 00:15:36.713 "crdt1": 0, 00:15:36.713 "crdt2": 0, 00:15:36.713 "crdt3": 0 00:15:36.713 } 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "method": "nvmf_create_transport", 00:15:36.713 "params": { 00:15:36.713 "trtype": "TCP", 00:15:36.713 "max_queue_depth": 128, 00:15:36.713 "max_io_qpairs_per_ctrlr": 127, 00:15:36.713 "in_capsule_data_size": 4096, 00:15:36.713 "max_io_size": 131072, 00:15:36.713 "io_unit_size": 131072, 00:15:36.713 "max_aq_depth": 128, 00:15:36.713 "num_shared_buffers": 511, 00:15:36.713 "buf_cache_size": 4294967295, 00:15:36.713 "dif_insert_or_strip": false, 00:15:36.713 "zcopy": false, 00:15:36.713 "c2h_success": false, 00:15:36.713 "sock_priority": 0, 00:15:36.713 "abort_timeout_sec": 1, 00:15:36.713 "ack_timeout": 0, 00:15:36.713 "data_wr_pool_size": 0 00:15:36.713 } 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "method": "nvmf_create_subsystem", 00:15:36.713 "params": { 00:15:36.713 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.713 "allow_any_host": false, 00:15:36.713 "serial_number": "00000000000000000000", 00:15:36.713 "model_number": "SPDK bdev Controller", 00:15:36.713 "max_namespaces": 32, 00:15:36.713 "min_cntlid": 1, 00:15:36.713 "max_cntlid": 65519, 00:15:36.713 "ana_reporting": false 00:15:36.713 } 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "method": "nvmf_subsystem_add_host", 00:15:36.713 "params": { 00:15:36.713 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.713 "host": "nqn.2016-06.io.spdk:host1", 00:15:36.713 "psk": "key0" 00:15:36.713 } 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "method": "nvmf_subsystem_add_ns", 00:15:36.713 "params": { 00:15:36.713 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.713 "namespace": { 00:15:36.713 "nsid": 1, 00:15:36.713 "bdev_name": "malloc0", 00:15:36.713 "nguid": "AC2154BCE76B4145B7CB554E05C19BC6", 00:15:36.713 "uuid": "ac2154bc-e76b-4145-b7cb-554e05c19bc6", 00:15:36.713 "no_auto_visible": false 00:15:36.713 } 00:15:36.713 } 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "method": "nvmf_subsystem_add_listener", 00:15:36.713 "params": { 00:15:36.713 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.713 "listen_address": { 00:15:36.713 "trtype": "TCP", 00:15:36.713 "adrfam": "IPv4", 00:15:36.713 "traddr": "10.0.0.3", 00:15:36.713 "trsvcid": "4420" 00:15:36.713 }, 00:15:36.713 "secure_channel": false, 00:15:36.713 "sock_impl": "ssl" 00:15:36.713 } 00:15:36.713 } 00:15:36.713 ] 00:15:36.713 } 00:15:36.713 ] 00:15:36.713 }' 00:15:36.713 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:36.714 16:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:36.714 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.714 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86169 00:15:36.714 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:36.714 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86169 00:15:36.714 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86169 ']' 00:15:36.714 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.714 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.714 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.714 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.714 16:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.714 [2024-11-29 16:51:00.381968] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:36.714 [2024-11-29 16:51:00.382041] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.972 [2024-11-29 16:51:00.503224] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:36.972 [2024-11-29 16:51:00.530418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.973 [2024-11-29 16:51:00.549195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.973 [2024-11-29 16:51:00.549256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.973 [2024-11-29 16:51:00.549283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.973 [2024-11-29 16:51:00.549291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.973 [2024-11-29 16:51:00.549297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
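The target start above replays the JSON captured earlier with save_config: the test feeds $tgtcfg to nvmf_tgt as -c /dev/fd/62, so the new process comes up with the keyring entry, the TLS-enabled listener and the subsystem already configured. A rough equivalent without the network-namespace wrapper (same binary and flags as this run; the process substitution is one way to reproduce the /dev/fd plumbing visible in the trace):

    # Capture the running target's configuration ...
    tgtcfg=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)
    # ... and boot a fresh nvmf_tgt directly from that JSON.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")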
00:15:36.973 [2024-11-29 16:51:00.549689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.973 [2024-11-29 16:51:00.692883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:36.973 [2024-11-29 16:51:00.748432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.231 [2024-11-29 16:51:00.780395] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:37.231 [2024-11-29 16:51:00.780793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:37.799 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.799 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:37.799 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:37.799 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:37.799 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:37.799 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.799 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=86201 00:15:37.799 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 86201 /var/tmp/bdevperf.sock 00:15:37.799 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86201 ']' 00:15:37.799 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:37.799 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:37.799 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.799 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
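bdevperf is launched in the same configure-over-RPC style: -z keeps it idle until an RPC client drives it, -r names its private UNIX socket, and -c /dev/fd/63 hands it the JSON shown below, which already contains the keyring entry and the bdev_nvme_attach_controller call. A stripped-down version of that launch, with a plain RPC poll standing in for the framework's waitforlisten helper (the rpc_get_methods probe is an assumption, not part of the test script):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
        -c <(echo "$bperfcfg") &
    bdevperf_pid=$!
    # Poll the RPC socket until the application is ready to accept commands.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock -t 120 \
        rpc_get_methods > /dev/null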
00:15:37.799 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:37.799 "subsystems": [ 00:15:37.799 { 00:15:37.799 "subsystem": "keyring", 00:15:37.799 "config": [ 00:15:37.799 { 00:15:37.799 "method": "keyring_file_add_key", 00:15:37.799 "params": { 00:15:37.799 "name": "key0", 00:15:37.799 "path": "/tmp/tmp.br8RQ8iLS0" 00:15:37.799 } 00:15:37.799 } 00:15:37.799 ] 00:15:37.799 }, 00:15:37.799 { 00:15:37.799 "subsystem": "iobuf", 00:15:37.799 "config": [ 00:15:37.799 { 00:15:37.799 "method": "iobuf_set_options", 00:15:37.799 "params": { 00:15:37.799 "small_pool_count": 8192, 00:15:37.799 "large_pool_count": 1024, 00:15:37.799 "small_bufsize": 8192, 00:15:37.799 "large_bufsize": 135168, 00:15:37.799 "enable_numa": false 00:15:37.799 } 00:15:37.799 } 00:15:37.799 ] 00:15:37.799 }, 00:15:37.799 { 00:15:37.799 "subsystem": "sock", 00:15:37.799 "config": [ 00:15:37.799 { 00:15:37.799 "method": "sock_set_default_impl", 00:15:37.799 "params": { 00:15:37.799 "impl_name": "uring" 00:15:37.799 } 00:15:37.799 }, 00:15:37.799 { 00:15:37.799 "method": "sock_impl_set_options", 00:15:37.799 "params": { 00:15:37.799 "impl_name": "ssl", 00:15:37.799 "recv_buf_size": 4096, 00:15:37.799 "send_buf_size": 4096, 00:15:37.799 "enable_recv_pipe": true, 00:15:37.799 "enable_quickack": false, 00:15:37.799 "enable_placement_id": 0, 00:15:37.799 "enable_zerocopy_send_server": true, 00:15:37.799 "enable_zerocopy_send_client": false, 00:15:37.799 "zerocopy_threshold": 0, 00:15:37.799 "tls_version": 0, 00:15:37.799 "enable_ktls": false 00:15:37.799 } 00:15:37.799 }, 00:15:37.799 { 00:15:37.799 "method": "sock_impl_set_options", 00:15:37.799 "params": { 00:15:37.799 "impl_name": "posix", 00:15:37.799 "recv_buf_size": 2097152, 00:15:37.799 "send_buf_size": 2097152, 00:15:37.799 "enable_recv_pipe": true, 00:15:37.799 "enable_quickack": false, 00:15:37.799 "enable_placement_id": 0, 00:15:37.799 "enable_zerocopy_send_server": true, 00:15:37.799 "enable_zerocopy_send_client": false, 00:15:37.799 "zerocopy_threshold": 0, 00:15:37.799 "tls_version": 0, 00:15:37.799 "enable_ktls": false 00:15:37.799 } 00:15:37.799 }, 00:15:37.799 { 00:15:37.799 "method": "sock_impl_set_options", 00:15:37.799 "params": { 00:15:37.799 "impl_name": "uring", 00:15:37.799 "recv_buf_size": 2097152, 00:15:37.799 "send_buf_size": 2097152, 00:15:37.799 "enable_recv_pipe": true, 00:15:37.799 "enable_quickack": false, 00:15:37.799 "enable_placement_id": 0, 00:15:37.799 "enable_zerocopy_send_server": false, 00:15:37.799 "enable_zerocopy_send_client": false, 00:15:37.799 "zerocopy_threshold": 0, 00:15:37.799 "tls_version": 0, 00:15:37.799 "enable_ktls": false 00:15:37.799 } 00:15:37.799 } 00:15:37.799 ] 00:15:37.799 }, 00:15:37.799 { 00:15:37.799 "subsystem": "vmd", 00:15:37.799 "config": [] 00:15:37.799 }, 00:15:37.799 { 00:15:37.799 "subsystem": "accel", 00:15:37.799 "config": [ 00:15:37.799 { 00:15:37.799 "method": "accel_set_options", 00:15:37.799 "params": { 00:15:37.799 "small_cache_size": 128, 00:15:37.799 "large_cache_size": 16, 00:15:37.799 "task_count": 2048, 00:15:37.799 "sequence_count": 2048, 00:15:37.799 "buf_count": 2048 00:15:37.799 } 00:15:37.799 } 00:15:37.799 ] 00:15:37.799 }, 00:15:37.799 { 00:15:37.799 "subsystem": "bdev", 00:15:37.799 "config": [ 00:15:37.799 { 00:15:37.799 "method": "bdev_set_options", 00:15:37.799 "params": { 00:15:37.799 "bdev_io_pool_size": 65535, 00:15:37.799 "bdev_io_cache_size": 256, 00:15:37.799 "bdev_auto_examine": true, 00:15:37.799 "iobuf_small_cache_size": 128, 00:15:37.799 
"iobuf_large_cache_size": 16 00:15:37.799 } 00:15:37.799 }, 00:15:37.799 { 00:15:37.799 "method": "bdev_raid_set_options", 00:15:37.800 "params": { 00:15:37.800 "process_window_size_kb": 1024, 00:15:37.800 "process_max_bandwidth_mb_sec": 0 00:15:37.800 } 00:15:37.800 }, 00:15:37.800 { 00:15:37.800 "method": "bdev_iscsi_set_options", 00:15:37.800 "params": { 00:15:37.800 "timeout_sec": 30 00:15:37.800 } 00:15:37.800 }, 00:15:37.800 { 00:15:37.800 "method": "bdev_nvme_set_options", 00:15:37.800 "params": { 00:15:37.800 "action_on_timeout": "none", 00:15:37.800 "timeout_us": 0, 00:15:37.800 "timeout_admin_us": 0, 00:15:37.800 "keep_alive_timeout_ms": 10000, 00:15:37.800 "arbitration_burst": 0, 00:15:37.800 "low_priority_weight": 0, 00:15:37.800 "medium_priority_weight": 0, 00:15:37.800 "high_priority_weight": 0, 00:15:37.800 "nvme_adminq_poll_period_us": 10000, 00:15:37.800 "nvme_ioq_poll_period_us": 0, 00:15:37.800 "io_queue_requests": 512, 00:15:37.800 "delay_cmd_submit": true, 00:15:37.800 "transport_retry_count": 4, 00:15:37.800 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.800 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.800 "bdev_retry_count": 3, 00:15:37.800 "transport_ack_timeout": 0, 00:15:37.800 "ctrlr_loss_timeout_sec": 0, 00:15:37.800 "reconnect_delay_sec": 0, 00:15:37.800 "fast_io_fail_timeout_sec": 0, 00:15:37.800 "disable_auto_failback": false, 00:15:37.800 "generate_uuids": false, 00:15:37.800 "transport_tos": 0, 00:15:37.800 "nvme_error_stat": false, 00:15:37.800 "rdma_srq_size": 0, 00:15:37.800 "io_path_stat": false, 00:15:37.800 "allow_accel_sequence": false, 00:15:37.800 "rdma_max_cq_size": 0, 00:15:37.800 "rdma_cm_event_timeout_ms": 0, 00:15:37.800 "dhchap_digests": [ 00:15:37.800 "sha256", 00:15:37.800 "sha384", 00:15:37.800 "sha512" 00:15:37.800 ], 00:15:37.800 "dhchap_dhgroups": [ 00:15:37.800 "null", 00:15:37.800 "ffdhe2048", 00:15:37.800 "ffdhe3072", 00:15:37.800 "ffdhe4096", 00:15:37.800 "ffdhe6144", 00:15:37.800 "ffdhe8192" 00:15:37.800 ] 00:15:37.800 } 00:15:37.800 }, 00:15:37.800 { 00:15:37.800 "method": "bdev_nvme_attach_controller", 00:15:37.800 "params": { 00:15:37.800 "name": "nvme0", 00:15:37.800 "trtype": "TCP", 00:15:37.800 "adrfam": "IPv4", 00:15:37.800 "traddr": "10.0.0.3", 00:15:37.800 "trsvcid": "4420", 00:15:37.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.800 "prchk_reftag": false, 00:15:37.800 "prchk_guard": false, 00:15:37.800 "ctrlr_loss_timeout_sec": 0, 00:15:37.800 "reconnect_delay_sec": 0, 00:15:37.800 "fast_io_fail_timeout_sec": 0, 00:15:37.800 "psk": "key0", 00:15:37.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.800 "hdgst": false, 00:15:37.800 "ddgst": false, 00:15:37.800 "multipath": "multipath" 00:15:37.800 } 00:15:37.800 }, 00:15:37.800 { 00:15:37.800 "method": "bdev_nvme_set_hotplug", 00:15:37.800 "params": { 00:15:37.800 "period_us": 100000, 00:15:37.800 "enable": false 00:15:37.800 } 00:15:37.800 }, 00:15:37.800 { 00:15:37.800 "method": "bdev_enable_histogram", 00:15:37.800 "params": { 00:15:37.800 "name": "nvme0n1", 00:15:37.800 "enable": true 00:15:37.800 } 00:15:37.800 }, 00:15:37.800 { 00:15:37.800 "method": "bdev_wait_for_examine" 00:15:37.800 } 00:15:37.800 ] 00:15:37.800 }, 00:15:37.800 { 00:15:37.800 "subsystem": "nbd", 00:15:37.800 "config": [] 00:15:37.800 } 00:15:37.800 ] 00:15:37.800 }' 00:15:37.800 [2024-11-29 16:51:01.516989] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 
initialization... 00:15:37.800 [2024-11-29 16:51:01.517236] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86201 ] 00:15:38.070 [2024-11-29 16:51:01.638770] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:38.070 [2024-11-29 16:51:01.671865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.070 [2024-11-29 16:51:01.696379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.070 [2024-11-29 16:51:01.812051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:38.070 [2024-11-29 16:51:01.843972] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:39.022 16:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.022 16:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:39.022 16:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:39.022 16:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:39.280 16:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.281 16:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:39.281 Running I/O for 1 seconds... 
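Before any I/O is issued the script verifies that the attach actually produced a controller named nvme0, and only then starts the run through bdevperf's RPC helper. The same check-and-run pair as traced above:

    name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests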
00:15:40.217 3825.00 IOPS, 14.94 MiB/s 00:15:40.217 Latency(us) 00:15:40.217 [2024-11-29T16:51:04.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.217 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:40.217 Verification LBA range: start 0x0 length 0x2000 00:15:40.217 nvme0n1 : 1.04 3815.82 14.91 0.00 0.00 33078.08 7566.43 21924.77 00:15:40.217 [2024-11-29T16:51:04.009Z] =================================================================================================================== 00:15:40.217 [2024-11-29T16:51:04.009Z] Total : 3815.82 14.91 0.00 0.00 33078.08 7566.43 21924.77 00:15:40.217 { 00:15:40.217 "results": [ 00:15:40.217 { 00:15:40.217 "job": "nvme0n1", 00:15:40.217 "core_mask": "0x2", 00:15:40.217 "workload": "verify", 00:15:40.217 "status": "finished", 00:15:40.217 "verify_range": { 00:15:40.217 "start": 0, 00:15:40.217 "length": 8192 00:15:40.217 }, 00:15:40.217 "queue_depth": 128, 00:15:40.217 "io_size": 4096, 00:15:40.217 "runtime": 1.036212, 00:15:40.217 "iops": 3815.821472826024, 00:15:40.217 "mibps": 14.905552628226657, 00:15:40.217 "io_failed": 0, 00:15:40.217 "io_timeout": 0, 00:15:40.217 "avg_latency_us": 33078.07701292132, 00:15:40.217 "min_latency_us": 7566.4290909090905, 00:15:40.217 "max_latency_us": 21924.77090909091 00:15:40.217 } 00:15:40.217 ], 00:15:40.217 "core_count": 1 00:15:40.217 } 00:15:40.217 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:40.217 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:40.217 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:40.217 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:15:40.217 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:15:40.217 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:40.477 nvmf_trace.0 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 86201 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86201 ']' 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86201 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86201 00:15:40.477 killing process 
with pid 86201 00:15:40.477 Received shutdown signal, test time was about 1.000000 seconds 00:15:40.477 00:15:40.477 Latency(us) 00:15:40.477 [2024-11-29T16:51:04.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.477 [2024-11-29T16:51:04.269Z] =================================================================================================================== 00:15:40.477 [2024-11-29T16:51:04.269Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86201' 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86201 00:15:40.477 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86201 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:40.736 rmmod nvme_tcp 00:15:40.736 rmmod nvme_fabrics 00:15:40.736 rmmod nvme_keyring 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 86169 ']' 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 86169 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86169 ']' 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86169 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86169 00:15:40.736 killing process with pid 86169 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86169' 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86169 00:15:40.736 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86169 
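Teardown mirrors setup: the tracepoint shared-memory file the target created (it was started with -e 0xFFFF, enabling all trace groups) is archived first, then both applications are killed and nvmftestfini unloads the nvme-tcp, nvme-fabrics and nvme_keyring modules and removes the virtual test network. The archival step on its own, exactly as run above:

    # Preserve the trace buffer of shm id 0 for offline 'spdk_trace' analysis.
    tar -C /dev/shm/ -cvzf \
        /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0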
00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.996 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.256 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:41.256 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.LHI93z8cMi /tmp/tmp.F7xqEuLzX7 /tmp/tmp.br8RQ8iLS0 00:15:41.256 ************************************ 00:15:41.256 END TEST nvmf_tls 00:15:41.256 ************************************ 00:15:41.256 00:15:41.256 real 1m21.191s 00:15:41.256 user 2m14.353s 00:15:41.256 sys 0m25.589s 00:15:41.256 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:15:41.256 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:41.256 16:51:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:41.256 16:51:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:41.256 16:51:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.256 16:51:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:41.256 ************************************ 00:15:41.256 START TEST nvmf_fips 00:15:41.256 ************************************ 00:15:41.256 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:41.256 * Looking for test storage... 00:15:41.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:41.256 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:41.256 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:15:41.256 16:51:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:41.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.519 --rc genhtml_branch_coverage=1 00:15:41.519 --rc genhtml_function_coverage=1 00:15:41.519 --rc genhtml_legend=1 00:15:41.519 --rc geninfo_all_blocks=1 00:15:41.519 --rc geninfo_unexecuted_blocks=1 00:15:41.519 00:15:41.519 ' 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:41.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.519 --rc genhtml_branch_coverage=1 00:15:41.519 --rc genhtml_function_coverage=1 00:15:41.519 --rc genhtml_legend=1 00:15:41.519 --rc geninfo_all_blocks=1 00:15:41.519 --rc geninfo_unexecuted_blocks=1 00:15:41.519 00:15:41.519 ' 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:41.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.519 --rc genhtml_branch_coverage=1 00:15:41.519 --rc genhtml_function_coverage=1 00:15:41.519 --rc genhtml_legend=1 00:15:41.519 --rc geninfo_all_blocks=1 00:15:41.519 --rc geninfo_unexecuted_blocks=1 00:15:41.519 00:15:41.519 ' 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:41.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.519 --rc genhtml_branch_coverage=1 00:15:41.519 --rc genhtml_function_coverage=1 00:15:41.519 --rc genhtml_legend=1 00:15:41.519 --rc geninfo_all_blocks=1 00:15:41.519 --rc geninfo_unexecuted_blocks=1 00:15:41.519 00:15:41.519 ' 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.519 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:41.520 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:15:41.520 Error setting digest 00:15:41.520 40D2F9750E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:41.520 40D2F9750E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:41.520 
16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.520 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:41.521 Cannot find device "nvmf_init_br" 00:15:41.521 16:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:41.521 Cannot find device "nvmf_init_br2" 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:41.521 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:41.779 Cannot find device "nvmf_tgt_br" 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.779 Cannot find device "nvmf_tgt_br2" 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:41.779 Cannot find device "nvmf_init_br" 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:41.779 Cannot find device "nvmf_init_br2" 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:41.779 Cannot find device "nvmf_tgt_br" 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:41.779 Cannot find device "nvmf_tgt_br2" 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:41.779 Cannot find device "nvmf_br" 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:41.779 Cannot find device "nvmf_init_if" 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:41.779 Cannot find device "nvmf_init_if2" 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.779 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.779 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.779 16:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:41.779 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:42.039 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:42.039 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:42.039 00:15:42.039 --- 10.0.0.3 ping statistics --- 00:15:42.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.039 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:42.039 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:42.039 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:42.039 00:15:42.039 --- 10.0.0.4 ping statistics --- 00:15:42.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.039 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:42.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:42.039 00:15:42.039 --- 10.0.0.1 ping statistics --- 00:15:42.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.039 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:42.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:42.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:15:42.039 00:15:42.039 --- 10.0.0.2 ping statistics --- 00:15:42.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.039 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=86524 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 86524 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 86524 ']' 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.039 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:42.039 [2024-11-29 16:51:05.762641] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:42.039 [2024-11-29 16:51:05.762733] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.299 [2024-11-29 16:51:05.889623] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:15:42.299 [2024-11-29 16:51:05.917676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.299 [2024-11-29 16:51:05.937142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.299 [2024-11-29 16:51:05.937213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.299 [2024-11-29 16:51:05.937239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.299 [2024-11-29 16:51:05.937246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.299 [2024-11-29 16:51:05.937253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.299 [2024-11-29 16:51:05.937559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.299 [2024-11-29 16:51:05.966488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.qkL 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.qkL 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.qkL 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.qkL 00:15:42.299 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.867 [2024-11-29 16:51:06.355028] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.867 [2024-11-29 16:51:06.370930] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:42.867 [2024-11-29 16:51:06.371184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:42.867 malloc0 00:15:42.867 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:42.867 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=86558 00:15:42.867 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:42.867 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 86558 /var/tmp/bdevperf.sock 00:15:42.867 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 86558 ']' 00:15:42.867 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:42.867 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:42.868 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:42.868 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.868 16:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:42.868 [2024-11-29 16:51:06.554317] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:42.868 [2024-11-29 16:51:06.554471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86558 ] 00:15:43.126 [2024-11-29 16:51:06.680788] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:43.126 [2024-11-29 16:51:06.714356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.126 [2024-11-29 16:51:06.740619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.126 [2024-11-29 16:51:06.777131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:44.063 16:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.063 16:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:44.063 16:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.qkL 00:15:44.063 16:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:44.322 [2024-11-29 16:51:08.003077] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:44.322 TLSTESTn1 00:15:44.322 16:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:44.581 Running I/O for 10 seconds... 
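Before the I/O samples below, the TLS setup interleaved through the trace above condenses to the following sequence (repo paths shortened; the key value, addresses and options are exactly the ones from this run):

# PSK interchange key written by fips.sh, then locked to mode 0600 before use
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > /tmp/spdk-psk.qkL
chmod 0600 /tmp/spdk-psk.qkL

# bdevperf starts idle (-z) on its own RPC socket; the key and the TLS-wrapped
# NVMe/TCP controller are then configured over that socket
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.qkL
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0

# 10 seconds of 4 KiB verify I/O at queue depth 128 over the TLS connection (results follow)
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests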
00:15:46.452 4032.00 IOPS, 15.75 MiB/s [2024-11-29T16:51:11.621Z] 4125.50 IOPS, 16.12 MiB/s [2024-11-29T16:51:12.558Z] 4118.33 IOPS, 16.09 MiB/s [2024-11-29T16:51:13.494Z] 4168.50 IOPS, 16.28 MiB/s [2024-11-29T16:51:14.431Z] 4184.60 IOPS, 16.35 MiB/s [2024-11-29T16:51:15.369Z] 4166.17 IOPS, 16.27 MiB/s [2024-11-29T16:51:16.318Z] 4146.00 IOPS, 16.20 MiB/s [2024-11-29T16:51:17.254Z] 4164.12 IOPS, 16.27 MiB/s [2024-11-29T16:51:18.634Z] 4158.22 IOPS, 16.24 MiB/s [2024-11-29T16:51:18.634Z] 4136.90 IOPS, 16.16 MiB/s 00:15:54.842 Latency(us) 00:15:54.842 [2024-11-29T16:51:18.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.842 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:54.842 Verification LBA range: start 0x0 length 0x2000 00:15:54.842 TLSTESTn1 : 10.02 4142.75 16.18 0.00 0.00 30842.45 4736.47 29669.93 00:15:54.842 [2024-11-29T16:51:18.634Z] =================================================================================================================== 00:15:54.842 [2024-11-29T16:51:18.634Z] Total : 4142.75 16.18 0.00 0.00 30842.45 4736.47 29669.93 00:15:54.842 { 00:15:54.842 "results": [ 00:15:54.842 { 00:15:54.842 "job": "TLSTESTn1", 00:15:54.842 "core_mask": "0x4", 00:15:54.842 "workload": "verify", 00:15:54.842 "status": "finished", 00:15:54.842 "verify_range": { 00:15:54.842 "start": 0, 00:15:54.842 "length": 8192 00:15:54.842 }, 00:15:54.842 "queue_depth": 128, 00:15:54.842 "io_size": 4096, 00:15:54.842 "runtime": 10.015807, 00:15:54.842 "iops": 4142.751552620773, 00:15:54.842 "mibps": 16.182623252424893, 00:15:54.842 "io_failed": 0, 00:15:54.842 "io_timeout": 0, 00:15:54.842 "avg_latency_us": 30842.450767292623, 00:15:54.842 "min_latency_us": 4736.465454545454, 00:15:54.842 "max_latency_us": 29669.934545454544 00:15:54.842 } 00:15:54.842 ], 00:15:54.842 "core_count": 1 00:15:54.842 } 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:54.842 nvmf_trace.0 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 86558 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 86558 ']' 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
86558 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86558 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:54.842 killing process with pid 86558 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86558' 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 86558 00:15:54.842 Received shutdown signal, test time was about 10.000000 seconds 00:15:54.842 00:15:54.842 Latency(us) 00:15:54.842 [2024-11-29T16:51:18.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.842 [2024-11-29T16:51:18.634Z] =================================================================================================================== 00:15:54.842 [2024-11-29T16:51:18.634Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 86558 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:54.842 rmmod nvme_tcp 00:15:54.842 rmmod nvme_fabrics 00:15:54.842 rmmod nvme_keyring 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:54.842 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:54.843 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 86524 ']' 00:15:54.843 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 86524 00:15:54.843 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 86524 ']' 00:15:54.843 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 86524 00:15:54.843 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:54.843 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.843 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86524 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:15:55.101 killing process with pid 86524 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86524' 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 86524 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 86524 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:55.101 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:55.360 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:55.360 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:55.360 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:55.360 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:55.360 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.360 16:51:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.360 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:55.360 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.360 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.360 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.360 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:55.360 16:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.qkL 00:15:55.360 ************************************ 00:15:55.360 END TEST nvmf_fips 00:15:55.360 ************************************ 00:15:55.360 00:15:55.360 real 0m14.184s 00:15:55.360 user 0m20.029s 00:15:55.360 sys 0m5.678s 00:15:55.360 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.360 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:55.360 16:51:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:55.360 16:51:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:55.360 16:51:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.360 16:51:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:55.360 ************************************ 00:15:55.360 START TEST nvmf_control_msg_list 00:15:55.360 ************************************ 00:15:55.360 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:55.621 * Looking for test storage... 00:15:55.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:55.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.621 --rc genhtml_branch_coverage=1 00:15:55.621 --rc genhtml_function_coverage=1 00:15:55.621 --rc genhtml_legend=1 00:15:55.621 --rc geninfo_all_blocks=1 00:15:55.621 --rc geninfo_unexecuted_blocks=1 00:15:55.621 00:15:55.621 ' 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:55.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.621 --rc genhtml_branch_coverage=1 00:15:55.621 --rc genhtml_function_coverage=1 00:15:55.621 --rc genhtml_legend=1 00:15:55.621 --rc geninfo_all_blocks=1 00:15:55.621 --rc geninfo_unexecuted_blocks=1 00:15:55.621 00:15:55.621 ' 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:55.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.621 --rc genhtml_branch_coverage=1 00:15:55.621 --rc genhtml_function_coverage=1 00:15:55.621 --rc genhtml_legend=1 00:15:55.621 --rc geninfo_all_blocks=1 00:15:55.621 --rc geninfo_unexecuted_blocks=1 00:15:55.621 00:15:55.621 ' 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:55.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.621 --rc genhtml_branch_coverage=1 00:15:55.621 --rc genhtml_function_coverage=1 00:15:55.621 --rc genhtml_legend=1 00:15:55.621 --rc geninfo_all_blocks=1 00:15:55.621 --rc 
geninfo_unexecuted_blocks=1 00:15:55.621 00:15:55.621 ' 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.621 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:55.622 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:55.622 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:55.623 Cannot find device "nvmf_init_br" 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:55.623 Cannot find device "nvmf_init_br2" 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:55.623 Cannot find device "nvmf_tgt_br" 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.623 Cannot find device "nvmf_tgt_br2" 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:55.623 Cannot find device "nvmf_init_br" 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:55.623 Cannot find device "nvmf_init_br2" 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:55.623 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:55.623 Cannot find device "nvmf_tgt_br" 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:55.883 Cannot find device "nvmf_tgt_br2" 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:55.883 Cannot find device "nvmf_br" 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:55.883 Cannot find 
device "nvmf_init_if" 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:55.883 Cannot find device "nvmf_init_if2" 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:55.883 16:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:55.883 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:56.143 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.143 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:15:56.143 00:15:56.143 --- 10.0.0.3 ping statistics --- 00:15:56.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.143 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:56.143 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:56.143 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:15:56.143 00:15:56.143 --- 10.0.0.4 ping statistics --- 00:15:56.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.143 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:56.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:56.143 00:15:56.143 --- 10.0.0.1 ping statistics --- 00:15:56.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.143 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:56.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:15:56.143 00:15:56.143 --- 10.0.0.2 ping statistics --- 00:15:56.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.143 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=86941 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 86941 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 86941 ']' 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
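Note: the trace above is nvmf_veth_init building the virtual test network that the remaining TCP tests run over. The earlier "Cannot find device" / "Cannot open network namespace" messages are expected: the init path first tries to tear down interfaces left over from a previous run before recreating them, and each failed delete is followed by "# true". Condensed into plain commands, the topology it sets up is roughly the sketch below (namespace, interface and address names are taken from the trace; this is a hand-written summary of the harness, not part of it, and it drops the iptables comment strings the script adds):

  ip netns add nvmf_tgt_ns_spdk                                  # target-side namespace
  ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator-side veth pairs
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target-side veth pairs
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator addresses
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring every link up (plus lo inside the namespace), then bridge the host-side peers together
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$br" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP to port 4420
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) confirm both directions across the bridge before nvmf_tgt is launched inside nvmf_tgt_ns_spdk with "-i 0 -e 0xFFFF".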
00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.143 16:51:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.143 [2024-11-29 16:51:19.828239] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:15:56.143 [2024-11-29 16:51:19.828366] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.403 [2024-11-29 16:51:19.956664] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:56.403 [2024-11-29 16:51:19.987037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.403 [2024-11-29 16:51:20.010393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.403 [2024-11-29 16:51:20.010467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.403 [2024-11-29 16:51:20.010495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.403 [2024-11-29 16:51:20.010505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.403 [2024-11-29 16:51:20.010513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.403 [2024-11-29 16:51:20.010874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.403 [2024-11-29 16:51:20.045168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.403 [2024-11-29 16:51:20.145112] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.403 16:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.403 Malloc0 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.403 [2024-11-29 16:51:20.180974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.403 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=86971 00:15:56.404 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:56.404 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=86972 00:15:56.404 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:56.404 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=86973 00:15:56.404 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 
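Note: with the target up, control_msg_list.sh configures it over the RPC socket and then launches three single-queue perf clients whose results appear in the tables that follow. The rpc_cmd calls in the trace correspond to the RPC methods shown below; the scripts/rpc.py spelling is an assumption about what the wrapper ultimately issues, while the arguments, NQN and paths are copied from the trace:

  # target-side configuration, issued against /var/tmp/spdk.sock inside the namespace
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create -b Malloc0 32 512        # 32 MB malloc bdev, 512-byte blocks
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # three initiator-side clients, one core each (0x2, 0x4, 0x8), queue depth 1, 4 KiB random reads, 1 second
  ./build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
  # repeated with -c 0x4 and -c 0x8; the script then waits on all three PIDs (86971-86973 here)

The transport is created with --in-capsule-data-size 768 and --control-msg-num 1, presumably so that the three concurrent clients contend for a single control-message buffer, which is the behavior this test exercises; the discovery-listener warnings below are expected because the discovery subsystem never has the listener added explicitly.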
00:15:56.404 16:51:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 86971 00:15:56.663 [2024-11-29 16:51:20.379702] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:56.663 [2024-11-29 16:51:20.379941] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:56.663 [2024-11-29 16:51:20.389556] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:58.043 Initializing NVMe Controllers 00:15:58.043 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:58.043 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:58.043 Initialization complete. Launching workers. 00:15:58.043 ======================================================== 00:15:58.043 Latency(us) 00:15:58.043 Device Information : IOPS MiB/s Average min max 00:15:58.043 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3363.00 13.14 296.95 192.51 873.19 00:15:58.043 ======================================================== 00:15:58.043 Total : 3363.00 13.14 296.95 192.51 873.19 00:15:58.043 00:15:58.043 Initializing NVMe Controllers 00:15:58.043 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:58.043 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:58.043 Initialization complete. Launching workers. 00:15:58.043 ======================================================== 00:15:58.043 Latency(us) 00:15:58.043 Device Information : IOPS MiB/s Average min max 00:15:58.043 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3359.00 13.12 297.30 192.87 873.13 00:15:58.043 ======================================================== 00:15:58.043 Total : 3359.00 13.12 297.30 192.87 873.13 00:15:58.043 00:15:58.043 Initializing NVMe Controllers 00:15:58.043 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:58.043 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:58.043 Initialization complete. Launching workers. 
00:15:58.043 ======================================================== 00:15:58.043 Latency(us) 00:15:58.043 Device Information : IOPS MiB/s Average min max 00:15:58.043 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3383.98 13.22 295.14 115.04 888.28 00:15:58.043 ======================================================== 00:15:58.043 Total : 3383.98 13.22 295.14 115.04 888.28 00:15:58.043 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 86972 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 86973 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:58.044 rmmod nvme_tcp 00:15:58.044 rmmod nvme_fabrics 00:15:58.044 rmmod nvme_keyring 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 86941 ']' 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 86941 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 86941 ']' 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 86941 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86941 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:58.044 killing process with pid 86941 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86941' 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 86941 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 86941 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:58.044 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:58.303 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:58.303 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:58.303 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.303 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.303 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:58.303 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.303 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:58.303 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.303 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:58.303 00:15:58.303 real 0m2.870s 00:15:58.303 user 0m4.750s 00:15:58.303 
sys 0m1.343s 00:15:58.303 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.303 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:58.303 ************************************ 00:15:58.303 END TEST nvmf_control_msg_list 00:15:58.303 ************************************ 00:15:58.303 16:51:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:58.303 16:51:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:58.303 16:51:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.303 16:51:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:58.303 ************************************ 00:15:58.303 START TEST nvmf_wait_for_buf 00:15:58.303 ************************************ 00:15:58.303 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:58.563 * Looking for test storage... 00:15:58.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:58.563 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:58.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.564 --rc genhtml_branch_coverage=1 00:15:58.564 --rc genhtml_function_coverage=1 00:15:58.564 --rc genhtml_legend=1 00:15:58.564 --rc geninfo_all_blocks=1 00:15:58.564 --rc geninfo_unexecuted_blocks=1 00:15:58.564 00:15:58.564 ' 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:58.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.564 --rc genhtml_branch_coverage=1 00:15:58.564 --rc genhtml_function_coverage=1 00:15:58.564 --rc genhtml_legend=1 00:15:58.564 --rc geninfo_all_blocks=1 00:15:58.564 --rc geninfo_unexecuted_blocks=1 00:15:58.564 00:15:58.564 ' 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:58.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.564 --rc genhtml_branch_coverage=1 00:15:58.564 --rc genhtml_function_coverage=1 00:15:58.564 --rc genhtml_legend=1 00:15:58.564 --rc geninfo_all_blocks=1 00:15:58.564 --rc geninfo_unexecuted_blocks=1 00:15:58.564 00:15:58.564 ' 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:58.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.564 --rc genhtml_branch_coverage=1 00:15:58.564 --rc genhtml_function_coverage=1 00:15:58.564 --rc genhtml_legend=1 00:15:58.564 --rc geninfo_all_blocks=1 00:15:58.564 --rc geninfo_unexecuted_blocks=1 00:15:58.564 00:15:58.564 ' 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:58.564 16:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:58.564 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
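Note: the "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message, seen here and earlier in the control_msg_list run, is bash complaining about a numeric test whose left-hand operand is an empty string; the line traced just before it is '[' '' -eq 1 ']'. A minimal reproduction outside the harness:

  $ [ "" -eq 1 ]
  bash: [: : integer expression expected

The test command exits non-zero, the script simply takes the false branch (the trace continues at nvmf/common.sh@37), so the message is noise rather than a failure.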
00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:58.564 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:58.565 Cannot find device "nvmf_init_br" 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:58.565 Cannot find device "nvmf_init_br2" 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:58.565 Cannot find device "nvmf_tgt_br" 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.565 Cannot find device "nvmf_tgt_br2" 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:58.565 Cannot find device "nvmf_init_br" 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:58.565 Cannot find device "nvmf_init_br2" 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:58.565 Cannot find device "nvmf_tgt_br" 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:58.565 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:58.824 Cannot find device "nvmf_tgt_br2" 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:58.824 Cannot find device "nvmf_br" 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:58.824 Cannot find device "nvmf_init_if" 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:58.824 Cannot find device "nvmf_init_if2" 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.824 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:58.824 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:59.084 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:59.084 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:15:59.084 00:15:59.084 --- 10.0.0.3 ping statistics --- 00:15:59.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.084 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:59.084 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:59.084 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:59.084 00:15:59.084 --- 10.0.0.4 ping statistics --- 00:15:59.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.084 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:59.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:59.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:59.084 00:15:59.084 --- 10.0.0.1 ping statistics --- 00:15:59.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.084 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:59.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:59.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:15:59.084 00:15:59.084 --- 10.0.0.2 ping statistics --- 00:15:59.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.084 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=87202 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 87202 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 87202 ']' 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.084 16:51:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.084 [2024-11-29 16:51:22.756147] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
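By this point nvmf_veth_init has finished building the test network: the initiator-side veth ends nvmf_init_if/nvmf_init_if2 (10.0.0.1-2) stay in the default namespace, their target-side counterparts nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3-4) are moved into nvmf_tgt_ns_spdk, all bridge-side peers are joined through nvmf_br, port 4420 is opened in iptables, and each address is verified with a single ping. A condensed sketch of the same topology, reduced to one pair per side and using only commands that appear in the trace:

    # Minimal sketch of the topology nvmf_veth_init builds (one pair per side shown).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    # Allow NVMe/TCP traffic to port 4420 and forwarding across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3        # default namespace -> target namespace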
00:15:59.084 [2024-11-29 16:51:22.756962] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.343 [2024-11-29 16:51:22.883989] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:59.343 [2024-11-29 16:51:22.916774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.343 [2024-11-29 16:51:22.939216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.343 [2024-11-29 16:51:22.939517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:59.343 [2024-11-29 16:51:22.939714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.343 [2024-11-29 16:51:22.939868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.343 [2024-11-29 16:51:22.939918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.343 [2024-11-29 16:51:22.940388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- 
# rpc_cmd framework_start_init 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.343 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.343 [2024-11-29 16:51:23.121542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.603 Malloc0 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.603 [2024-11-29 16:51:23.170596] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:59.603 [2024-11-29 16:51:23.194704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:59.603 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.604 16:51:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 
-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:59.864 [2024-11-29 16:51:23.401559] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:01.244 Initializing NVMe Controllers 00:16:01.244 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:01.244 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:16:01.244 Initialization complete. Launching workers. 00:16:01.244 ======================================================== 00:16:01.244 Latency(us) 00:16:01.244 Device Information : IOPS MiB/s Average min max 00:16:01.244 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 504.43 63.05 7930.36 2184.93 11177.42 00:16:01.244 ======================================================== 00:16:01.244 Total : 504.43 63.05 7930.36 2184.93 11177.42 00:16:01.244 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4804 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4804 -eq 0 ]] 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:01.244 rmmod nvme_tcp 00:16:01.244 rmmod nvme_fabrics 00:16:01.244 rmmod nvme_keyring 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 87202 ']' 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 87202 00:16:01.244 16:51:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 87202 ']' 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 87202 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87202 00:16:01.244 killing process with pid 87202 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87202' 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 87202 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 87202 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:16:01.244 16:51:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:16:01.244 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:01.244 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:01.244 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:01.244 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:01.503 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:01.503 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.503 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 
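The retry_count=4804 read back above is the point of the wait_for_buf case: iobuf_set_options shrinks the small buffer pool to 154 entries and the TCP transport is created with only 24 buffers (-n 24 -b 24), so the 128 KiB reads issued by spdk_nvme_perf have to queue for shared buffers, and the test checks that the nvmf_TCP small-pool retry counter is non-zero. A sketch of the same check run by hand against a live target, assuming SPDK's rpc.py from the repository root (the script itself goes through its rpc_cmd wrapper):

    # Query iobuf statistics from a running SPDK target and extract the
    # small-pool retry count for the nvmf_TCP module, as wait_for_buf.sh does.
    retry_count=$(scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [ "$retry_count" -eq 0 ]; then
        echo "no buffer-wait retries observed" >&2
        exit 1
    fi
    echo "small-pool retries: $retry_count"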
00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:16:01.504 00:16:01.504 real 0m3.237s 00:16:01.504 user 0m2.560s 00:16:01.504 sys 0m0.803s 00:16:01.504 ************************************ 00:16:01.504 END TEST nvmf_wait_for_buf 00:16:01.504 ************************************ 00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.504 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:01.763 ************************************ 00:16:01.763 START TEST nvmf_fuzz 00:16:01.763 ************************************ 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:16:01.763 * Looking for test storage... 
00:16:01.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:01.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.763 --rc genhtml_branch_coverage=1 00:16:01.763 --rc genhtml_function_coverage=1 00:16:01.763 --rc genhtml_legend=1 00:16:01.763 --rc geninfo_all_blocks=1 00:16:01.763 --rc geninfo_unexecuted_blocks=1 00:16:01.763 00:16:01.763 ' 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:01.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.763 --rc genhtml_branch_coverage=1 00:16:01.763 --rc genhtml_function_coverage=1 00:16:01.763 --rc genhtml_legend=1 00:16:01.763 --rc geninfo_all_blocks=1 00:16:01.763 --rc geninfo_unexecuted_blocks=1 00:16:01.763 00:16:01.763 ' 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:01.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.763 --rc genhtml_branch_coverage=1 00:16:01.763 --rc genhtml_function_coverage=1 00:16:01.763 --rc genhtml_legend=1 00:16:01.763 --rc geninfo_all_blocks=1 00:16:01.763 --rc geninfo_unexecuted_blocks=1 00:16:01.763 00:16:01.763 ' 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:01.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.763 --rc genhtml_branch_coverage=1 00:16:01.763 --rc genhtml_function_coverage=1 00:16:01.763 --rc genhtml_legend=1 00:16:01.763 --rc geninfo_all_blocks=1 00:16:01.763 --rc geninfo_unexecuted_blocks=1 00:16:01.763 00:16:01.763 ' 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
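The scripts/common.sh lines traced just above are the lcov version gate: lt 1.15 2 splits both version strings on dots and compares them field by field, and because the installed lcov reports 1.x the legacy --rc lcov_branch_coverage/lcov_function_coverage options are exported. A simplified standalone rendering of that comparison (the helper name version_lt is ours; the script's own functions are lt/cmp_versions/decimal):

    # Return success (0) if dotted version $1 is strictly less than $2.
    # Condensed from the field-by-field comparison scripts/common.sh performs.
    version_lt() {
        local IFS=.- i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_*_coverage options"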
00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.763 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:01.764 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
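Before the fuzz run, nvmf/common.sh again fixes the fabric defaults: listener ports 4420-4422, a per-run host NQN from nvme gen-hostnqn, and the matching --hostnqn/--hostid pair collected in NVME_HOST. Those identifiers are what a kernel initiator would present when attaching to the target; an illustrative nvme-cli session (not something this trace performs — the tests here drive the target with spdk_nvme_perf and nvme_fuzz instead):

    # Illustrative attach/detach with nvme-cli, using the subsystem and address
    # this trace configures further below; hostnqn generated the same way common.sh does.
    HOSTNQN=$(nvme gen-hostnqn)
    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"
    nvme list-subsys          # confirm the new NVMe/TCP controller appeared
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1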
00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.764 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.023 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:02.023 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:02.023 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:02.023 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:02.023 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:02.023 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:02.023 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:02.023 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:02.023 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:02.023 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:02.024 Cannot find device "nvmf_init_br" 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:16:02.024 16:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:02.024 Cannot find device "nvmf_init_br2" 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:02.024 Cannot find device "nvmf_tgt_br" 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.024 Cannot find device "nvmf_tgt_br2" 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:02.024 Cannot find device "nvmf_init_br" 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:02.024 Cannot find device "nvmf_init_br2" 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:02.024 Cannot find device "nvmf_tgt_br" 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:02.024 Cannot find device "nvmf_tgt_br2" 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:02.024 Cannot find device "nvmf_br" 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:02.024 Cannot find device "nvmf_init_if" 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:02.024 Cannot find device "nvmf_init_if2" 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:02.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:02.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:02.024 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:02.284 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:02.284 16:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:02.285 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:02.285 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:16:02.285 00:16:02.285 --- 10.0.0.3 ping statistics --- 00:16:02.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.285 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:02.285 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:02.285 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:16:02.285 00:16:02.285 --- 10.0.0.4 ping statistics --- 00:16:02.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.285 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:02.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:02.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:02.285 00:16:02.285 --- 10.0.0.1 ping statistics --- 00:16:02.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.285 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:02.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:02.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:16:02.285 00:16:02.285 --- 10.0.0.2 ping statistics --- 00:16:02.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.285 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=87461 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 87461 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 87461 ']' 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
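With the target up (nvmfpid=87461, single core via -m 0x1), fabrics_fuzz.sh creates the TCP transport, a small Malloc0-backed subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420, and then runs the nvme_fuzz app against it twice: once for 30 seconds of seeded random commands (-t 30 -S 123456) and once replaying the bundled example.json command set. The two invocations that follow in the trace, gathered in one place (paths exactly as they appear on this VM):

    # Pass 1: 30 s of seeded random fuzzing against cnode1.
    /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a

    # Pass 2: replay the JSON-described command set shipped with the fuzzer.
    /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' \
        -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a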
00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.285 16:51:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.544 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.544 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:02.544 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:02.544 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.544 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.544 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.544 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:16:02.544 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.544 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.804 Malloc0 00:16:02.804 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.804 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:02.804 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.804 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.804 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.804 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:02.804 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.804 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.804 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.804 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:02.804 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.804 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.804 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.804 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:16:02.805 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:16:03.064 Shutting down the fuzz application 00:16:03.064 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:16:03.324 Shutting down the fuzz application 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:03.324 rmmod nvme_tcp 00:16:03.324 rmmod nvme_fabrics 00:16:03.324 rmmod nvme_keyring 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 87461 ']' 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 87461 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 87461 ']' 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 87461 00:16:03.324 16:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:03.324 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.324 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87461 00:16:03.324 killing process with pid 87461 00:16:03.324 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.324 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:03.324 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87461' 00:16:03.324 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 87461 00:16:03.324 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 87461 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:03.583 16:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:03.583 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:03.843 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:03.843 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:03.843 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.843 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.843 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.843 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:16:03.843 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:16:03.843 00:16:03.843 real 0m2.121s 00:16:03.843 user 0m1.721s 00:16:03.843 sys 0m0.651s 00:16:03.843 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.843 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.843 ************************************ 00:16:03.843 END TEST nvmf_fuzz 00:16:03.843 ************************************ 00:16:03.843 16:51:27 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:16:03.843 16:51:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:03.844 16:51:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.844 16:51:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:03.844 ************************************ 00:16:03.844 START TEST nvmf_multiconnection 00:16:03.844 ************************************ 00:16:03.844 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:16:03.844 * Looking for test storage... 00:16:03.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:03.844 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:03.844 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:16:03.844 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:04.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.105 --rc genhtml_branch_coverage=1 00:16:04.105 --rc genhtml_function_coverage=1 00:16:04.105 --rc genhtml_legend=1 00:16:04.105 --rc geninfo_all_blocks=1 00:16:04.105 --rc geninfo_unexecuted_blocks=1 00:16:04.105 00:16:04.105 ' 00:16:04.105 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:04.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.105 --rc genhtml_branch_coverage=1 00:16:04.105 --rc genhtml_function_coverage=1 00:16:04.105 --rc genhtml_legend=1 00:16:04.105 --rc geninfo_all_blocks=1 00:16:04.105 --rc geninfo_unexecuted_blocks=1 00:16:04.105 00:16:04.105 ' 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:04.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.106 --rc genhtml_branch_coverage=1 00:16:04.106 --rc genhtml_function_coverage=1 00:16:04.106 --rc genhtml_legend=1 00:16:04.106 --rc geninfo_all_blocks=1 00:16:04.106 --rc geninfo_unexecuted_blocks=1 00:16:04.106 00:16:04.106 ' 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:04.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.106 --rc genhtml_branch_coverage=1 00:16:04.106 --rc genhtml_function_coverage=1 00:16:04.106 --rc genhtml_legend=1 00:16:04.106 --rc geninfo_all_blocks=1 00:16:04.106 --rc geninfo_unexecuted_blocks=1 00:16:04.106 00:16:04.106 ' 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.106 
16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:04.106 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:04.106 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:04.107 16:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:04.107 Cannot find device "nvmf_init_br" 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:04.107 Cannot find device "nvmf_init_br2" 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:04.107 Cannot find device "nvmf_tgt_br" 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:04.107 Cannot find device "nvmf_tgt_br2" 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:04.107 Cannot find device "nvmf_init_br" 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:04.107 Cannot find device "nvmf_init_br2" 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:04.107 Cannot find device "nvmf_tgt_br" 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:04.107 Cannot find device "nvmf_tgt_br2" 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:04.107 Cannot find device "nvmf_br" 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:04.107 Cannot find device "nvmf_init_if" 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:16:04.107 Cannot find device "nvmf_init_if2" 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:04.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:04.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:04.107 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:04.376 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:04.376 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:04.376 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:04.376 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:04.376 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:04.376 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:04.376 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:04.376 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:04.376 16:51:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:04.376 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:04.376 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:16:04.376 00:16:04.376 --- 10.0.0.3 ping statistics --- 00:16:04.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.376 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:04.376 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:04.376 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:16:04.376 00:16:04.376 --- 10.0.0.4 ping statistics --- 00:16:04.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.376 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:04.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:04.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:16:04.376 00:16:04.376 --- 10.0.0.1 ping statistics --- 00:16:04.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.376 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:04.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:16:04.376 00:16:04.376 --- 10.0.0.2 ping statistics --- 00:16:04.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.376 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:04.376 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:04.710 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:16:04.710 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:04.710 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:04.710 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.710 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=87697 00:16:04.710 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:04.710 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 87697 00:16:04.710 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 87697 ']' 00:16:04.710 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.710 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.710 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
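For the multiconnection run the target is started the same way but with core mask 0xF, so four reactors come up inside the namespace (pid 87697 here), and the script blocks in waitforlisten until the app's RPC socket is usable. A rough, hypothetical stand-in for that wait, polling the default socket with a trivial RPC:

  sock=/var/tmp/spdk.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path; adjust to your checkout
  for i in $(seq 1 100); do                          # retry budget is an assumption
      [ -S "$sock" ] && "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done

The real helper in autotest_common.sh is more involved (among other things it keeps checking that the process is still alive); this only illustrates the idea.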
00:16:04.710 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.710 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.710 [2024-11-29 16:51:28.230272] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:04.710 [2024-11-29 16:51:28.230427] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.710 [2024-11-29 16:51:28.364216] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:04.710 [2024-11-29 16:51:28.392315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:04.710 [2024-11-29 16:51:28.419753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.710 [2024-11-29 16:51:28.419820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.710 [2024-11-29 16:51:28.419835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.710 [2024-11-29 16:51:28.419845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.710 [2024-11-29 16:51:28.419853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:04.710 [2024-11-29 16:51:28.420808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.710 [2024-11-29 16:51:28.420860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.710 [2024-11-29 16:51:28.420999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.710 [2024-11-29 16:51:28.421007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.710 [2024-11-29 16:51:28.456676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.970 [2024-11-29 16:51:28.548876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.970 Malloc1 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.970 [2024-11-29 16:51:28.619628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.970 Malloc2 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.970 16:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.970 Malloc3 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.970 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.971 Malloc4 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.971 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 Malloc5 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 
16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 Malloc6 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:05.231 16:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 Malloc7 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 Malloc8 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:16:05.231 16:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.232 Malloc9 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.232 16:51:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.232 Malloc10 00:16:05.232 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.232 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:16:05.232 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.232 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.232 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.232 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:16:05.232 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.232 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.491 Malloc11 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
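The repeated @21-@25 records above are successive passes of multiconnection.sh's subsystem setup loop. Reconstructed from the trace, one pass looks roughly like the sketch below; rpc_cmd, NVMF_SUBSYS, the "64 512" malloc bdev parameters and the 10.0.0.3:4420 TCP listener are taken from the trace itself, while the loop layout and the comments are inferred and may differ from the actual script.

    NVMF_SUBSYS=11    # eleven subsystems, per the "seq 1 11" expansion later in the trace
    for i in $(seq 1 $NVMF_SUBSYS); do
        # multiconnection.sh@22: malloc bdev (64 MiB, 512-byte blocks) named MallocN
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc${i}"
        # multiconnection.sh@23: subsystem cnodeN with serial SPDKN, any host allowed (-a)
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${i}" -a -s "SPDK${i}"
        # multiconnection.sh@24: expose the malloc bdev as a namespace of that subsystem
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${i}" "Malloc${i}"
        # multiconnection.sh@25: TCP listener on the target address used throughout this run
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${i}" -t tcp -a 10.0.0.3 -s 4420
    done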
00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.491 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:16:05.492 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:05.492 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:05.492 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:16:05.492 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:05.492 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:05.492 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:05.492 16:51:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:08.028 16:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:08.028 16:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:16:08.028 16:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:08.028 16:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:08.028 16:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.028 16:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:08.028 16:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:08.028 16:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:16:08.028 16:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:16:08.028 16:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:08.028 16:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.028 16:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:08.028 16:51:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:09.932 16:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:09.932 16:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:09.932 16:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:16:09.932 16:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:09.932 16:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:09.932 16:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:09.932 16:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:09.932 16:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:16:09.932 16:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:16:09.932 16:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:09.932 16:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:09.932 16:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:09.932 16:51:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:11.834 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:11.834 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:11.834 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:16:11.834 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:11.834 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:11.834 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:11.834 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:11.834 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:16:12.093 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:16:12.093 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:12.093 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.093 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:12.093 16:51:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:13.995 16:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:13.995 16:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:13.995 16:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:16:13.995 16:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:13.995 16:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:13.995 16:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:13.995 16:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:13.995 16:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:16:14.254 16:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:16:14.254 16:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:14.254 16:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:14.254 16:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:14.254 16:51:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:16.156 16:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:16.156 16:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:16.156 16:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:16:16.156 16:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:16.156 16:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:16.156 16:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:16.156 16:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:16.157 16:51:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:16:16.415 16:51:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:16:16.415 16:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:16.415 16:51:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:16.415 16:51:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:16.415 16:51:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:18.319 16:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:18.319 16:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:18.319 16:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:16:18.319 16:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:18.320 16:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.320 16:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:18.320 16:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:18.320 16:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:16:18.578 16:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:16:18.578 16:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:18.578 16:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.578 16:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:18.578 16:51:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:20.481 16:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:20.481 16:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:20.481 16:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:16:20.481 16:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:20.481 16:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.481 16:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:20.481 16:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:20.481 16:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 
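Each @28-@30 iteration above issues a kernel nvme connect against one subsystem and then polls until a block device with the matching serial appears. Read together with the autotest_common.sh@1202-@1212 records, the logic amounts to roughly the sketch below; the hostnqn/hostid values, the 15-iteration bound, the 2-second sleep and the lsblk/grep check all come from the trace, but the control flow of waitforserial is a reconstruction, not the real helper verbatim.

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b
    HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b

    # Poll lsblk until a device whose serial matches $1 shows up, or give up after ~15 tries.
    waitforserial() {
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        [[ -n "$2" ]] && nvme_device_counter="$2"
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
            -n "nqn.2016-06.io.spdk:cnode${i}" -a 10.0.0.3 -s 4420
        waitforserial "SPDK${i}"
    done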
00:16:20.741 16:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:16:20.741 16:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:20.741 16:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.741 16:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:20.741 16:51:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:22.641 16:51:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:22.641 16:51:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:22.641 16:51:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:16:22.641 16:51:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:22.641 16:51:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.641 16:51:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:22.641 16:51:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:22.641 16:51:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:16:22.937 16:51:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:16:22.937 16:51:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:22.937 16:51:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.937 16:51:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:22.937 16:51:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:24.857 16:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:24.857 16:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:24.857 16:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:16:24.857 16:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:24.857 16:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:24.857 16:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:24.857 16:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.857 16:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:16:25.115 16:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:16:25.115 16:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:25.115 16:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:25.115 16:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:25.115 16:51:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:27.019 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:27.019 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:27.019 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:16:27.019 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:27.019 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:27.019 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:27.019 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:27.019 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:16:27.278 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:27.278 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:27.278 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:27.278 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:27.278 16:51:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:29.180 16:51:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:29.180 16:51:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:29.180 16:51:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:16:29.180 16:51:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:29.180 16:51:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:29.180 16:51:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:29.180 16:51:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:29.180 [global] 00:16:29.180 thread=1 00:16:29.180 invalidate=1 00:16:29.180 rw=read 00:16:29.180 time_based=1 00:16:29.180 runtime=10 00:16:29.180 ioengine=libaio 00:16:29.180 direct=1 00:16:29.180 bs=262144 00:16:29.180 iodepth=64 00:16:29.180 norandommap=1 00:16:29.180 numjobs=1 00:16:29.180 00:16:29.180 [job0] 00:16:29.180 filename=/dev/nvme0n1 00:16:29.180 [job1] 00:16:29.180 filename=/dev/nvme10n1 00:16:29.180 [job2] 00:16:29.180 filename=/dev/nvme1n1 00:16:29.180 [job3] 00:16:29.180 filename=/dev/nvme2n1 00:16:29.180 [job4] 00:16:29.180 filename=/dev/nvme3n1 00:16:29.180 [job5] 00:16:29.180 filename=/dev/nvme4n1 00:16:29.180 [job6] 00:16:29.180 filename=/dev/nvme5n1 00:16:29.180 [job7] 00:16:29.180 filename=/dev/nvme6n1 00:16:29.180 [job8] 00:16:29.180 filename=/dev/nvme7n1 00:16:29.180 [job9] 00:16:29.180 filename=/dev/nvme8n1 00:16:29.438 [job10] 00:16:29.438 filename=/dev/nvme9n1 00:16:29.438 Could not set queue depth (nvme0n1) 00:16:29.438 Could not set queue depth (nvme10n1) 00:16:29.438 Could not set queue depth (nvme1n1) 00:16:29.438 Could not set queue depth (nvme2n1) 00:16:29.438 Could not set queue depth (nvme3n1) 00:16:29.438 Could not set queue depth (nvme4n1) 00:16:29.438 Could not set queue depth (nvme5n1) 00:16:29.438 Could not set queue depth (nvme6n1) 00:16:29.438 Could not set queue depth (nvme7n1) 00:16:29.438 Could not set queue depth (nvme8n1) 00:16:29.438 Could not set queue depth (nvme9n1) 00:16:29.697 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:29.697 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:29.697 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:29.697 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:29.697 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:29.697 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:29.697 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:29.697 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:29.697 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:29.697 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:29.697 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:29.697 fio-3.35 00:16:29.697 Starting 11 threads 00:16:41.906 00:16:41.906 job0: (groupid=0, jobs=1): err= 0: pid=88148: Fri Nov 29 16:52:03 2024 00:16:41.906 read: IOPS=103, BW=26.0MiB/s (27.2MB/s)(263MiB/10134msec) 00:16:41.906 slat (usec): min=22, max=280236, avg=9540.70, stdev=26271.07 00:16:41.906 clat (msec): min=90, max=803, avg=606.26, stdev=139.52 00:16:41.906 lat (msec): min=103, max=803, avg=615.80, stdev=140.42 00:16:41.906 clat percentiles (msec): 00:16:41.906 | 1.00th=[ 118], 5.00th=[ 190], 10.00th=[ 502], 20.00th=[ 550], 00:16:41.906 | 30.00th=[ 584], 40.00th=[ 617], 50.00th=[ 642], 60.00th=[ 659], 00:16:41.906 | 70.00th=[ 684], 
80.00th=[ 701], 90.00th=[ 735], 95.00th=[ 743], 00:16:41.906 | 99.00th=[ 768], 99.50th=[ 768], 99.90th=[ 802], 99.95th=[ 802], 00:16:41.906 | 99.99th=[ 802] 00:16:41.906 bw ( KiB/s): min=16896, max=32256, per=4.03%, avg=25295.60, stdev=4938.16, samples=20 00:16:41.906 iops : min= 66, max= 126, avg=98.75, stdev=19.36, samples=20 00:16:41.906 lat (msec) : 100=0.10%, 250=5.89%, 500=3.33%, 750=88.12%, 1000=2.57% 00:16:41.906 cpu : usr=0.04%, sys=0.53%, ctx=204, majf=0, minf=4097 00:16:41.906 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:16:41.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.906 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.906 issued rwts: total=1052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.906 job1: (groupid=0, jobs=1): err= 0: pid=88149: Fri Nov 29 16:52:03 2024 00:16:41.906 read: IOPS=183, BW=45.8MiB/s (48.0MB/s)(463MiB/10106msec) 00:16:41.906 slat (usec): min=21, max=159670, avg=5398.61, stdev=14912.83 00:16:41.906 clat (msec): min=15, max=493, avg=343.65, stdev=69.95 00:16:41.906 lat (msec): min=15, max=493, avg=349.05, stdev=69.47 00:16:41.906 clat percentiles (msec): 00:16:41.906 | 1.00th=[ 125], 5.00th=[ 224], 10.00th=[ 257], 20.00th=[ 292], 00:16:41.906 | 30.00th=[ 317], 40.00th=[ 334], 50.00th=[ 355], 60.00th=[ 368], 00:16:41.906 | 70.00th=[ 384], 80.00th=[ 405], 90.00th=[ 426], 95.00th=[ 439], 00:16:41.906 | 99.00th=[ 464], 99.50th=[ 464], 99.90th=[ 481], 99.95th=[ 493], 00:16:41.906 | 99.99th=[ 493] 00:16:41.906 bw ( KiB/s): min=36864, max=53248, per=7.29%, avg=45763.55, stdev=4212.88, samples=20 00:16:41.906 iops : min= 144, max= 208, avg=178.70, stdev=16.45, samples=20 00:16:41.906 lat (msec) : 20=0.11%, 100=0.27%, 250=8.91%, 500=90.71% 00:16:41.906 cpu : usr=0.12%, sys=0.86%, ctx=343, majf=0, minf=4097 00:16:41.906 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:16:41.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.906 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.906 issued rwts: total=1851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.906 job2: (groupid=0, jobs=1): err= 0: pid=88150: Fri Nov 29 16:52:03 2024 00:16:41.906 read: IOPS=443, BW=111MiB/s (116MB/s)(1116MiB/10058msec) 00:16:41.906 slat (usec): min=20, max=57830, avg=2236.86, stdev=5108.12 00:16:41.906 clat (msec): min=18, max=203, avg=141.76, stdev=17.46 00:16:41.906 lat (msec): min=18, max=204, avg=144.00, stdev=17.59 00:16:41.906 clat percentiles (msec): 00:16:41.906 | 1.00th=[ 78], 5.00th=[ 108], 10.00th=[ 128], 20.00th=[ 136], 00:16:41.906 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 144], 60.00th=[ 146], 00:16:41.906 | 70.00th=[ 148], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 165], 00:16:41.906 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 197], 99.95th=[ 201], 00:16:41.906 | 99.99th=[ 205] 00:16:41.906 bw ( KiB/s): min=106283, max=117248, per=17.95%, avg=112643.75, stdev=2541.67, samples=20 00:16:41.906 iops : min= 415, max= 458, avg=440.00, stdev= 9.95, samples=20 00:16:41.906 lat (msec) : 20=0.09%, 50=0.02%, 100=3.74%, 250=96.15% 00:16:41.906 cpu : usr=0.27%, sys=1.91%, ctx=943, majf=0, minf=4097 00:16:41.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:41.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:16:41.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.906 issued rwts: total=4465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.906 job3: (groupid=0, jobs=1): err= 0: pid=88151: Fri Nov 29 16:52:03 2024 00:16:41.906 read: IOPS=102, BW=25.5MiB/s (26.8MB/s)(259MiB/10142msec) 00:16:41.906 slat (usec): min=22, max=317231, avg=9682.57, stdev=27036.67 00:16:41.906 clat (msec): min=19, max=825, avg=616.64, stdev=133.74 00:16:41.906 lat (msec): min=20, max=825, avg=626.32, stdev=134.64 00:16:41.906 clat percentiles (msec): 00:16:41.906 | 1.00th=[ 45], 5.00th=[ 388], 10.00th=[ 493], 20.00th=[ 550], 00:16:41.906 | 30.00th=[ 592], 40.00th=[ 617], 50.00th=[ 642], 60.00th=[ 659], 00:16:41.906 | 70.00th=[ 693], 80.00th=[ 718], 90.00th=[ 743], 95.00th=[ 760], 00:16:41.906 | 99.00th=[ 793], 99.50th=[ 810], 99.90th=[ 810], 99.95th=[ 827], 00:16:41.906 | 99.99th=[ 827] 00:16:41.906 bw ( KiB/s): min=12800, max=33346, per=3.96%, avg=24855.10, stdev=5441.19, samples=20 00:16:41.906 iops : min= 50, max= 130, avg=97.00, stdev=21.18, samples=20 00:16:41.906 lat (msec) : 20=0.10%, 50=1.93%, 250=1.45%, 500=7.05%, 750=81.26% 00:16:41.906 lat (msec) : 1000=8.21% 00:16:41.906 cpu : usr=0.09%, sys=0.48%, ctx=206, majf=0, minf=4097 00:16:41.906 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:16:41.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.906 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.906 issued rwts: total=1035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.906 job4: (groupid=0, jobs=1): err= 0: pid=88152: Fri Nov 29 16:52:03 2024 00:16:41.906 read: IOPS=443, BW=111MiB/s (116MB/s)(1116MiB/10057msec) 00:16:41.906 slat (usec): min=19, max=50736, avg=2236.69, stdev=5074.77 00:16:41.906 clat (msec): min=20, max=214, avg=141.82, stdev=19.41 00:16:41.906 lat (msec): min=20, max=214, avg=144.05, stdev=19.53 00:16:41.906 clat percentiles (msec): 00:16:41.906 | 1.00th=[ 59], 5.00th=[ 107], 10.00th=[ 128], 20.00th=[ 136], 00:16:41.906 | 30.00th=[ 138], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 146], 00:16:41.906 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 167], 00:16:41.906 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 201], 99.95th=[ 201], 00:16:41.906 | 99.99th=[ 215] 00:16:41.906 bw ( KiB/s): min=109568, max=119296, per=17.94%, avg=112592.00, stdev=2767.07, samples=20 00:16:41.906 iops : min= 428, max= 466, avg=439.80, stdev=10.81, samples=20 00:16:41.906 lat (msec) : 50=0.81%, 100=3.18%, 250=96.01% 00:16:41.906 cpu : usr=0.30%, sys=1.95%, ctx=927, majf=0, minf=4097 00:16:41.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:41.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.906 issued rwts: total=4462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.906 job5: (groupid=0, jobs=1): err= 0: pid=88153: Fri Nov 29 16:52:03 2024 00:16:41.906 read: IOPS=104, BW=26.0MiB/s (27.3MB/s)(264MiB/10138msec) 00:16:41.906 slat (usec): min=20, max=239927, avg=9489.67, stdev=25358.18 00:16:41.906 clat (msec): min=84, max=823, avg=603.77, stdev=118.62 00:16:41.906 lat (msec): min=85, max=844, avg=613.26, 
stdev=119.73 00:16:41.906 clat percentiles (msec): 00:16:41.906 | 1.00th=[ 148], 5.00th=[ 275], 10.00th=[ 518], 20.00th=[ 550], 00:16:41.906 | 30.00th=[ 575], 40.00th=[ 600], 50.00th=[ 617], 60.00th=[ 642], 00:16:41.906 | 70.00th=[ 667], 80.00th=[ 693], 90.00th=[ 718], 95.00th=[ 735], 00:16:41.906 | 99.00th=[ 776], 99.50th=[ 793], 99.90th=[ 827], 99.95th=[ 827], 00:16:41.906 | 99.99th=[ 827] 00:16:41.906 bw ( KiB/s): min=16384, max=32256, per=4.05%, avg=25391.95, stdev=4252.51, samples=20 00:16:41.906 iops : min= 64, max= 126, avg=99.10, stdev=16.61, samples=20 00:16:41.906 lat (msec) : 100=0.76%, 250=3.31%, 500=3.31%, 750=89.39%, 1000=3.22% 00:16:41.906 cpu : usr=0.07%, sys=0.47%, ctx=208, majf=0, minf=4097 00:16:41.906 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:16:41.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.906 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.906 issued rwts: total=1056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.906 job6: (groupid=0, jobs=1): err= 0: pid=88154: Fri Nov 29 16:52:03 2024 00:16:41.906 read: IOPS=102, BW=25.5MiB/s (26.8MB/s)(259MiB/10138msec) 00:16:41.906 slat (usec): min=20, max=321196, avg=9651.31, stdev=27660.93 00:16:41.906 clat (msec): min=133, max=824, avg=615.71, stdev=98.67 00:16:41.906 lat (msec): min=194, max=824, avg=625.36, stdev=99.01 00:16:41.906 clat percentiles (msec): 00:16:41.906 | 1.00th=[ 288], 5.00th=[ 388], 10.00th=[ 514], 20.00th=[ 567], 00:16:41.906 | 30.00th=[ 592], 40.00th=[ 609], 50.00th=[ 634], 60.00th=[ 651], 00:16:41.906 | 70.00th=[ 659], 80.00th=[ 684], 90.00th=[ 718], 95.00th=[ 743], 00:16:41.906 | 99.00th=[ 802], 99.50th=[ 802], 99.90th=[ 802], 99.95th=[ 827], 00:16:41.906 | 99.99th=[ 827] 00:16:41.906 bw ( KiB/s): min=15872, max=32702, per=3.97%, avg=24910.65, stdev=4394.03, samples=20 00:16:41.906 iops : min= 62, max= 127, avg=97.25, stdev=17.09, samples=20 00:16:41.906 lat (msec) : 250=0.77%, 500=7.14%, 750=87.55%, 1000=4.54% 00:16:41.906 cpu : usr=0.04%, sys=0.51%, ctx=197, majf=0, minf=4097 00:16:41.906 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:16:41.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.906 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.906 issued rwts: total=1036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.906 job7: (groupid=0, jobs=1): err= 0: pid=88155: Fri Nov 29 16:52:03 2024 00:16:41.906 read: IOPS=301, BW=75.3MiB/s (79.0MB/s)(760MiB/10088msec) 00:16:41.907 slat (usec): min=20, max=112119, avg=3289.97, stdev=7768.20 00:16:41.907 clat (msec): min=10, max=292, avg=208.97, stdev=24.67 00:16:41.907 lat (msec): min=11, max=305, avg=212.26, stdev=25.10 00:16:41.907 clat percentiles (msec): 00:16:41.907 | 1.00th=[ 91], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 199], 00:16:41.907 | 30.00th=[ 205], 40.00th=[ 209], 50.00th=[ 213], 60.00th=[ 215], 00:16:41.907 | 70.00th=[ 220], 80.00th=[ 224], 90.00th=[ 232], 95.00th=[ 236], 00:16:41.907 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 288], 99.95th=[ 288], 00:16:41.907 | 99.99th=[ 292] 00:16:41.907 bw ( KiB/s): min=70656, max=87377, per=12.14%, avg=76196.55, stdev=4054.30, samples=20 00:16:41.907 iops : min= 276, max= 341, avg=297.40, stdev=15.87, samples=20 00:16:41.907 lat (msec) : 20=0.10%, 100=1.12%, 
250=98.03%, 500=0.76% 00:16:41.907 cpu : usr=0.17%, sys=1.31%, ctx=635, majf=0, minf=4097 00:16:41.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:16:41.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.907 issued rwts: total=3039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.907 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.907 job8: (groupid=0, jobs=1): err= 0: pid=88157: Fri Nov 29 16:52:03 2024 00:16:41.907 read: IOPS=201, BW=50.4MiB/s (52.8MB/s)(509MiB/10099msec) 00:16:41.907 slat (usec): min=20, max=115277, avg=4923.61, stdev=13027.64 00:16:41.907 clat (msec): min=16, max=415, avg=312.20, stdev=63.17 00:16:41.907 lat (msec): min=16, max=421, avg=317.13, stdev=64.07 00:16:41.907 clat percentiles (msec): 00:16:41.907 | 1.00th=[ 47], 5.00th=[ 211], 10.00th=[ 236], 20.00th=[ 279], 00:16:41.907 | 30.00th=[ 305], 40.00th=[ 317], 50.00th=[ 330], 60.00th=[ 338], 00:16:41.907 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 368], 95.00th=[ 380], 00:16:41.907 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 414], 99.95th=[ 414], 00:16:41.907 | 99.99th=[ 414] 00:16:41.907 bw ( KiB/s): min=40016, max=72192, per=8.05%, avg=50510.05, stdev=7572.39, samples=20 00:16:41.907 iops : min= 156, max= 282, avg=197.15, stdev=29.56, samples=20 00:16:41.907 lat (msec) : 20=0.10%, 50=0.93%, 100=1.38%, 250=12.87%, 500=84.72% 00:16:41.907 cpu : usr=0.12%, sys=0.88%, ctx=399, majf=0, minf=4097 00:16:41.907 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:16:41.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.907 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.907 issued rwts: total=2036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.907 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.907 job9: (groupid=0, jobs=1): err= 0: pid=88161: Fri Nov 29 16:52:03 2024 00:16:41.907 read: IOPS=300, BW=75.1MiB/s (78.7MB/s)(757MiB/10086msec) 00:16:41.907 slat (usec): min=20, max=113626, avg=3305.53, stdev=7754.14 00:16:41.907 clat (msec): min=21, max=305, avg=209.71, stdev=26.07 00:16:41.907 lat (msec): min=22, max=305, avg=213.02, stdev=26.38 00:16:41.907 clat percentiles (msec): 00:16:41.907 | 1.00th=[ 90], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 199], 00:16:41.907 | 30.00th=[ 205], 40.00th=[ 209], 50.00th=[ 211], 60.00th=[ 215], 00:16:41.907 | 70.00th=[ 220], 80.00th=[ 226], 90.00th=[ 232], 95.00th=[ 239], 00:16:41.907 | 99.00th=[ 257], 99.50th=[ 266], 99.90th=[ 271], 99.95th=[ 305], 00:16:41.907 | 99.99th=[ 305] 00:16:41.907 bw ( KiB/s): min=67719, max=81058, per=12.10%, avg=75938.40, stdev=3010.34, samples=20 00:16:41.907 iops : min= 264, max= 316, avg=296.40, stdev=11.80, samples=20 00:16:41.907 lat (msec) : 50=0.92%, 100=0.26%, 250=97.49%, 500=1.32% 00:16:41.907 cpu : usr=0.19%, sys=1.32%, ctx=623, majf=0, minf=4097 00:16:41.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:16:41.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.907 issued rwts: total=3028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.907 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.907 job10: (groupid=0, jobs=1): err= 0: pid=88162: Fri Nov 29 16:52:03 2024 00:16:41.907 read: IOPS=178, BW=44.7MiB/s 
(46.9MB/s)(451MiB/10097msec) 00:16:41.907 slat (usec): min=20, max=309255, avg=5545.49, stdev=16781.39 00:16:41.907 clat (msec): min=19, max=576, avg=351.90, stdev=58.71 00:16:41.907 lat (msec): min=20, max=576, avg=357.44, stdev=57.98 00:16:41.907 clat percentiles (msec): 00:16:41.907 | 1.00th=[ 205], 5.00th=[ 253], 10.00th=[ 271], 20.00th=[ 305], 00:16:41.907 | 30.00th=[ 326], 40.00th=[ 342], 50.00th=[ 359], 60.00th=[ 368], 00:16:41.907 | 70.00th=[ 384], 80.00th=[ 405], 90.00th=[ 430], 95.00th=[ 443], 00:16:41.907 | 99.00th=[ 456], 99.50th=[ 468], 99.90th=[ 575], 99.95th=[ 575], 00:16:41.907 | 99.99th=[ 575] 00:16:41.907 bw ( KiB/s): min=31807, max=52224, per=7.11%, avg=44600.50, stdev=5643.17, samples=20 00:16:41.907 iops : min= 124, max= 204, avg=174.05, stdev=22.11, samples=20 00:16:41.907 lat (msec) : 20=0.06%, 100=0.06%, 250=4.49%, 500=95.29%, 750=0.11% 00:16:41.907 cpu : usr=0.06%, sys=0.81%, ctx=322, majf=0, minf=4097 00:16:41.907 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:16:41.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.907 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.907 issued rwts: total=1805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.907 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.907 00:16:41.907 Run status group 0 (all jobs): 00:16:41.907 READ: bw=613MiB/s (643MB/s), 25.5MiB/s-111MiB/s (26.8MB/s-116MB/s), io=6216MiB (6518MB), run=10057-10142msec 00:16:41.907 00:16:41.907 Disk stats (read/write): 00:16:41.907 nvme0n1: ios=1983/0, merge=0/0, ticks=1205145/0, in_queue=1205145, util=97.76% 00:16:41.907 nvme10n1: ios=3592/0, merge=0/0, ticks=1223834/0, in_queue=1223834, util=98.01% 00:16:41.907 nvme1n1: ios=8834/0, merge=0/0, ticks=1239566/0, in_queue=1239566, util=98.16% 00:16:41.907 nvme2n1: ios=1949/0, merge=0/0, ticks=1209960/0, in_queue=1209960, util=98.25% 00:16:41.907 nvme3n1: ios=8810/0, merge=0/0, ticks=1236269/0, in_queue=1236269, util=98.31% 00:16:41.907 nvme4n1: ios=1988/0, merge=0/0, ticks=1201292/0, in_queue=1201292, util=98.42% 00:16:41.907 nvme5n1: ios=1954/0, merge=0/0, ticks=1205361/0, in_queue=1205361, util=98.61% 00:16:41.907 nvme6n1: ios=5951/0, merge=0/0, ticks=1231981/0, in_queue=1231981, util=98.66% 00:16:41.907 nvme7n1: ios=3947/0, merge=0/0, ticks=1228009/0, in_queue=1228009, util=98.90% 00:16:41.907 nvme8n1: ios=5928/0, merge=0/0, ticks=1232951/0, in_queue=1232951, util=99.01% 00:16:41.907 nvme9n1: ios=3486/0, merge=0/0, ticks=1220468/0, in_queue=1220468, util=99.06% 00:16:41.907 16:52:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:16:41.907 [global] 00:16:41.907 thread=1 00:16:41.907 invalidate=1 00:16:41.907 rw=randwrite 00:16:41.907 time_based=1 00:16:41.907 runtime=10 00:16:41.907 ioengine=libaio 00:16:41.907 direct=1 00:16:41.907 bs=262144 00:16:41.907 iodepth=64 00:16:41.907 norandommap=1 00:16:41.907 numjobs=1 00:16:41.907 00:16:41.907 [job0] 00:16:41.907 filename=/dev/nvme0n1 00:16:41.907 [job1] 00:16:41.907 filename=/dev/nvme10n1 00:16:41.907 [job2] 00:16:41.907 filename=/dev/nvme1n1 00:16:41.907 [job3] 00:16:41.907 filename=/dev/nvme2n1 00:16:41.907 [job4] 00:16:41.907 filename=/dev/nvme3n1 00:16:41.907 [job5] 00:16:41.907 filename=/dev/nvme4n1 00:16:41.907 [job6] 00:16:41.907 filename=/dev/nvme5n1 00:16:41.907 [job7] 00:16:41.907 filename=/dev/nvme6n1 00:16:41.907 [job8] 
00:16:41.907 filename=/dev/nvme7n1 00:16:41.907 [job9] 00:16:41.907 filename=/dev/nvme8n1 00:16:41.907 [job10] 00:16:41.907 filename=/dev/nvme9n1 00:16:41.907 Could not set queue depth (nvme0n1) 00:16:41.907 Could not set queue depth (nvme10n1) 00:16:41.907 Could not set queue depth (nvme1n1) 00:16:41.907 Could not set queue depth (nvme2n1) 00:16:41.907 Could not set queue depth (nvme3n1) 00:16:41.907 Could not set queue depth (nvme4n1) 00:16:41.907 Could not set queue depth (nvme5n1) 00:16:41.907 Could not set queue depth (nvme6n1) 00:16:41.907 Could not set queue depth (nvme7n1) 00:16:41.907 Could not set queue depth (nvme8n1) 00:16:41.907 Could not set queue depth (nvme9n1) 00:16:41.907 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:41.907 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:41.907 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:41.907 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:41.907 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:41.907 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:41.907 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:41.907 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:41.907 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:41.907 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:41.907 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:41.907 fio-3.35 00:16:41.907 Starting 11 threads 00:16:51.892 00:16:51.892 job0: (groupid=0, jobs=1): err= 0: pid=88358: Fri Nov 29 16:52:14 2024 00:16:51.892 write: IOPS=388, BW=97.2MiB/s (102MB/s)(984MiB/10121msec); 0 zone resets 00:16:51.892 slat (usec): min=16, max=227467, avg=2487.15, stdev=5592.53 00:16:51.892 clat (msec): min=111, max=467, avg=162.07, stdev=33.92 00:16:51.892 lat (msec): min=121, max=467, avg=164.56, stdev=33.99 00:16:51.892 clat percentiles (msec): 00:16:51.892 | 1.00th=[ 144], 5.00th=[ 146], 10.00th=[ 146], 20.00th=[ 148], 00:16:51.892 | 30.00th=[ 155], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 157], 00:16:51.892 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 247], 00:16:51.892 | 99.00th=[ 300], 99.50th=[ 372], 99.90th=[ 439], 99.95th=[ 468], 00:16:51.892 | 99.99th=[ 468] 00:16:51.892 bw ( KiB/s): min=34816, max=106496, per=12.93%, avg=99123.20, stdev=18143.35, samples=20 00:16:51.892 iops : min= 136, max= 416, avg=387.20, stdev=70.87, samples=20 00:16:51.892 lat (msec) : 250=95.20%, 500=4.80% 00:16:51.892 cpu : usr=0.60%, sys=0.94%, ctx=5355, majf=0, minf=1 00:16:51.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:51.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:51.892 issued rwts: total=0,3935,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:16:51.892 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:51.892 job1: (groupid=0, jobs=1): err= 0: pid=88359: Fri Nov 29 16:52:14 2024 00:16:51.892 write: IOPS=391, BW=98.0MiB/s (103MB/s)(992MiB/10123msec); 0 zone resets 00:16:51.892 slat (usec): min=17, max=91746, avg=2513.94, stdev=4655.11 00:16:51.892 clat (msec): min=94, max=306, avg=160.73, stdev=27.16 00:16:51.892 lat (msec): min=94, max=306, avg=163.25, stdev=27.20 00:16:51.892 clat percentiles (msec): 00:16:51.892 | 1.00th=[ 144], 5.00th=[ 146], 10.00th=[ 146], 20.00th=[ 148], 00:16:51.892 | 30.00th=[ 155], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 157], 00:16:51.892 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 236], 00:16:51.892 | 99.00th=[ 284], 99.50th=[ 296], 99.90th=[ 305], 99.95th=[ 309], 00:16:51.892 | 99.99th=[ 309] 00:16:51.892 bw ( KiB/s): min=52328, max=108544, per=13.04%, avg=99947.60, stdev=14815.62, samples=20 00:16:51.892 iops : min= 204, max= 424, avg=390.40, stdev=57.94, samples=20 00:16:51.892 lat (msec) : 100=0.05%, 250=95.79%, 500=4.16% 00:16:51.892 cpu : usr=0.78%, sys=1.17%, ctx=5907, majf=0, minf=1 00:16:51.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:51.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:51.892 issued rwts: total=0,3967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.892 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:51.892 job2: (groupid=0, jobs=1): err= 0: pid=88371: Fri Nov 29 16:52:14 2024 00:16:51.892 write: IOPS=132, BW=33.2MiB/s (34.8MB/s)(341MiB/10266msec); 0 zone resets 00:16:51.892 slat (usec): min=16, max=133002, avg=7337.53, stdev=14271.91 00:16:51.892 clat (msec): min=128, max=734, avg=474.08, stdev=67.79 00:16:51.892 lat (msec): min=128, max=734, avg=481.42, stdev=67.64 00:16:51.892 clat percentiles (msec): 00:16:51.892 | 1.00th=[ 188], 5.00th=[ 342], 10.00th=[ 401], 20.00th=[ 456], 00:16:51.892 | 30.00th=[ 477], 40.00th=[ 481], 50.00th=[ 489], 60.00th=[ 498], 00:16:51.892 | 70.00th=[ 506], 80.00th=[ 510], 90.00th=[ 518], 95.00th=[ 527], 00:16:51.892 | 99.00th=[ 642], 99.50th=[ 676], 99.90th=[ 735], 99.95th=[ 735], 00:16:51.892 | 99.99th=[ 735] 00:16:51.892 bw ( KiB/s): min=30720, max=36864, per=4.34%, avg=33283.45, stdev=1867.69, samples=20 00:16:51.892 iops : min= 120, max= 144, avg=130.00, stdev= 7.28, samples=20 00:16:51.892 lat (msec) : 250=2.42%, 500=59.02%, 750=38.56% 00:16:51.892 cpu : usr=0.24%, sys=0.43%, ctx=1467, majf=0, minf=1 00:16:51.892 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:16:51.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.892 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:51.892 issued rwts: total=0,1364,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.892 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:51.892 job3: (groupid=0, jobs=1): err= 0: pid=88377: Fri Nov 29 16:52:14 2024 00:16:51.892 write: IOPS=392, BW=98.1MiB/s (103MB/s)(994MiB/10130msec); 0 zone resets 00:16:51.892 slat (usec): min=17, max=117415, avg=2510.55, stdev=4712.07 00:16:51.892 clat (msec): min=20, max=355, avg=160.48, stdev=32.16 00:16:51.892 lat (msec): min=20, max=355, avg=162.99, stdev=32.30 00:16:51.892 clat percentiles (msec): 00:16:51.892 | 1.00th=[ 130], 5.00th=[ 144], 10.00th=[ 146], 20.00th=[ 148], 00:16:51.892 | 30.00th=[ 155], 40.00th=[ 155], 50.00th=[ 157], 
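The per-job summaries in this pass are all driven by the configuration fio-wrapper printed when the run started (-p nvmf -i 262144 -d 64 -t randwrite -r 10). Assembled into a standalone fio job file, that configuration corresponds roughly to the sketch below, in fio's standard INI-style syntax; only the first of the eleven per-device jobs is shown, the other ten differ only in filename.

    ; reconstructed from the [global]/[jobN] sections printed above by fio-wrapper
    [global]
    thread=1
    invalidate=1
    rw=randwrite
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=262144
    iodepth=64
    norandommap=1
    numjobs=1

    [job0]
    filename=/dev/nvme0n1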
60.00th=[ 157], 00:16:51.892 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 239], 00:16:51.892 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 351], 99.95th=[ 355], 00:16:51.892 | 99.99th=[ 355] 00:16:51.892 bw ( KiB/s): min=55406, max=106496, per=13.07%, avg=100178.30, stdev=14374.89, samples=20 00:16:51.892 iops : min= 216, max= 416, avg=391.30, stdev=56.22, samples=20 00:16:51.892 lat (msec) : 50=0.40%, 100=0.40%, 250=94.69%, 500=4.50% 00:16:51.892 cpu : usr=0.72%, sys=1.21%, ctx=3670, majf=0, minf=1 00:16:51.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:51.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:51.893 issued rwts: total=0,3976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:51.893 job4: (groupid=0, jobs=1): err= 0: pid=88378: Fri Nov 29 16:52:14 2024 00:16:51.893 write: IOPS=137, BW=34.4MiB/s (36.0MB/s)(353MiB/10271msec); 0 zone resets 00:16:51.893 slat (usec): min=19, max=92262, avg=7080.29, stdev=13056.35 00:16:51.893 clat (msec): min=38, max=750, avg=458.21, stdev=80.96 00:16:51.893 lat (msec): min=38, max=750, avg=465.29, stdev=81.38 00:16:51.893 clat percentiles (msec): 00:16:51.893 | 1.00th=[ 106], 5.00th=[ 296], 10.00th=[ 368], 20.00th=[ 447], 00:16:51.893 | 30.00th=[ 460], 40.00th=[ 472], 50.00th=[ 481], 60.00th=[ 481], 00:16:51.893 | 70.00th=[ 485], 80.00th=[ 498], 90.00th=[ 514], 95.00th=[ 523], 00:16:51.893 | 99.00th=[ 625], 99.50th=[ 693], 99.90th=[ 751], 99.95th=[ 751], 00:16:51.893 | 99.99th=[ 751] 00:16:51.893 bw ( KiB/s): min=30720, max=49152, per=4.50%, avg=34531.05, stdev=3880.85, samples=20 00:16:51.893 iops : min= 120, max= 192, avg=134.85, stdev=15.17, samples=20 00:16:51.893 lat (msec) : 50=0.28%, 100=0.57%, 250=2.27%, 500=79.53%, 750=17.21% 00:16:51.893 lat (msec) : 1000=0.14% 00:16:51.893 cpu : usr=0.25%, sys=0.36%, ctx=1594, majf=0, minf=1 00:16:51.893 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:16:51.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.893 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:51.893 issued rwts: total=0,1412,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:51.893 job5: (groupid=0, jobs=1): err= 0: pid=88380: Fri Nov 29 16:52:14 2024 00:16:51.893 write: IOPS=137, BW=34.4MiB/s (36.1MB/s)(354MiB/10274msec); 0 zone resets 00:16:51.893 slat (usec): min=19, max=104330, avg=7065.06, stdev=13074.86 00:16:51.893 clat (msec): min=27, max=753, avg=457.35, stdev=79.20 00:16:51.893 lat (msec): min=27, max=753, avg=464.42, stdev=79.57 00:16:51.893 clat percentiles (msec): 00:16:51.893 | 1.00th=[ 96], 5.00th=[ 300], 10.00th=[ 397], 20.00th=[ 447], 00:16:51.893 | 30.00th=[ 460], 40.00th=[ 477], 50.00th=[ 477], 60.00th=[ 481], 00:16:51.893 | 70.00th=[ 485], 80.00th=[ 489], 90.00th=[ 506], 95.00th=[ 514], 00:16:51.893 | 99.00th=[ 634], 99.50th=[ 693], 99.90th=[ 751], 99.95th=[ 751], 00:16:51.893 | 99.99th=[ 751] 00:16:51.893 bw ( KiB/s): min=32702, max=48640, per=4.52%, avg=34619.85, stdev=3526.57, samples=20 00:16:51.893 iops : min= 127, max= 190, avg=135.05, stdev=13.80, samples=20 00:16:51.893 lat (msec) : 50=0.21%, 100=0.85%, 250=2.26%, 500=84.10%, 750=12.44% 00:16:51.893 lat (msec) : 1000=0.14% 00:16:51.893 cpu : usr=0.26%, sys=0.42%, ctx=1155, majf=0, 
minf=1 00:16:51.893 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:16:51.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.893 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:51.893 issued rwts: total=0,1415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:51.893 job6: (groupid=0, jobs=1): err= 0: pid=88381: Fri Nov 29 16:52:14 2024 00:16:51.893 write: IOPS=841, BW=210MiB/s (220MB/s)(2116MiB/10063msec); 0 zone resets 00:16:51.893 slat (usec): min=17, max=58706, avg=1151.11, stdev=2275.11 00:16:51.893 clat (msec): min=11, max=298, avg=74.92, stdev=29.46 00:16:51.893 lat (msec): min=11, max=298, avg=76.07, stdev=29.80 00:16:51.893 clat percentiles (msec): 00:16:51.893 | 1.00th=[ 25], 5.00th=[ 67], 10.00th=[ 68], 20.00th=[ 69], 00:16:51.893 | 30.00th=[ 70], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 72], 00:16:51.893 | 70.00th=[ 73], 80.00th=[ 73], 90.00th=[ 74], 95.00th=[ 75], 00:16:51.893 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 271], 99.95th=[ 284], 00:16:51.893 | 99.99th=[ 300] 00:16:51.893 bw ( KiB/s): min=57344, max=230400, per=28.05%, avg=215040.00, stdev=40502.37, samples=20 00:16:51.893 iops : min= 224, max= 900, avg=840.00, stdev=158.21, samples=20 00:16:51.893 lat (msec) : 20=0.67%, 50=1.45%, 100=94.84%, 250=1.56%, 500=1.48% 00:16:51.893 cpu : usr=1.42%, sys=2.05%, ctx=9812, majf=0, minf=1 00:16:51.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:51.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:51.893 issued rwts: total=0,8463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:51.893 job7: (groupid=0, jobs=1): err= 0: pid=88382: Fri Nov 29 16:52:14 2024 00:16:51.893 write: IOPS=200, BW=50.2MiB/s (52.6MB/s)(515MiB/10267msec); 0 zone resets 00:16:51.893 slat (usec): min=18, max=87105, avg=4833.09, stdev=10742.24 00:16:51.893 clat (msec): min=8, max=727, avg=313.97, stdev=198.12 00:16:51.893 lat (msec): min=8, max=727, avg=318.80, stdev=200.95 00:16:51.893 clat percentiles (msec): 00:16:51.893 | 1.00th=[ 30], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 64], 00:16:51.893 | 30.00th=[ 72], 40.00th=[ 338], 50.00th=[ 451], 60.00th=[ 468], 00:16:51.893 | 70.00th=[ 477], 80.00th=[ 481], 90.00th=[ 485], 95.00th=[ 489], 00:16:51.893 | 99.00th=[ 575], 99.50th=[ 634], 99.90th=[ 693], 99.95th=[ 726], 00:16:51.893 | 99.99th=[ 726] 00:16:51.893 bw ( KiB/s): min=32768, max=223166, per=6.67%, avg=51119.90, stdev=52051.19, samples=20 00:16:51.893 iops : min= 128, max= 871, avg=199.65, stdev=203.20, samples=20 00:16:51.893 lat (msec) : 10=0.19%, 20=0.44%, 50=0.97%, 100=36.12%, 250=0.97% 00:16:51.893 lat (msec) : 500=59.56%, 750=1.75% 00:16:51.893 cpu : usr=0.35%, sys=0.61%, ctx=1818, majf=0, minf=2 00:16:51.893 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:16:51.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.893 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:51.893 issued rwts: total=0,2060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:51.893 job8: (groupid=0, jobs=1): err= 0: pid=88383: Fri Nov 29 16:52:14 2024 00:16:51.893 write: IOPS=134, BW=33.6MiB/s 
(35.2MB/s)(345MiB/10272msec); 0 zone resets 00:16:51.893 slat (usec): min=21, max=97265, avg=7253.05, stdev=13603.70 00:16:51.893 clat (msec): min=34, max=740, avg=468.88, stdev=80.32 00:16:51.893 lat (msec): min=34, max=740, avg=476.13, stdev=80.65 00:16:51.893 clat percentiles (msec): 00:16:51.893 | 1.00th=[ 102], 5.00th=[ 317], 10.00th=[ 384], 20.00th=[ 451], 00:16:51.893 | 30.00th=[ 472], 40.00th=[ 481], 50.00th=[ 485], 60.00th=[ 493], 00:16:51.893 | 70.00th=[ 506], 80.00th=[ 510], 90.00th=[ 523], 95.00th=[ 531], 00:16:51.893 | 99.00th=[ 651], 99.50th=[ 676], 99.90th=[ 743], 99.95th=[ 743], 00:16:51.893 | 99.99th=[ 743] 00:16:51.893 bw ( KiB/s): min=30720, max=43008, per=4.39%, avg=33689.60, stdev=2932.29, samples=20 00:16:51.893 iops : min= 120, max= 168, avg=131.60, stdev=11.45, samples=20 00:16:51.893 lat (msec) : 50=0.29%, 100=0.58%, 250=2.61%, 500=59.64%, 750=36.88% 00:16:51.893 cpu : usr=0.20%, sys=0.37%, ctx=1644, majf=0, minf=1 00:16:51.893 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:16:51.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.893 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:51.893 issued rwts: total=0,1380,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:51.893 job9: (groupid=0, jobs=1): err= 0: pid=88384: Fri Nov 29 16:52:14 2024 00:16:51.893 write: IOPS=140, BW=35.1MiB/s (36.8MB/s)(360MiB/10265msec); 0 zone resets 00:16:51.893 slat (usec): min=17, max=79571, avg=6947.34, stdev=12605.41 00:16:51.893 clat (msec): min=38, max=737, avg=449.04, stdev=73.04 00:16:51.893 lat (msec): min=38, max=737, avg=455.98, stdev=73.32 00:16:51.893 clat percentiles (msec): 00:16:51.893 | 1.00th=[ 104], 5.00th=[ 300], 10.00th=[ 384], 20.00th=[ 443], 00:16:51.893 | 30.00th=[ 451], 40.00th=[ 460], 50.00th=[ 472], 60.00th=[ 472], 00:16:51.893 | 70.00th=[ 477], 80.00th=[ 481], 90.00th=[ 485], 95.00th=[ 485], 00:16:51.893 | 99.00th=[ 617], 99.50th=[ 676], 99.90th=[ 735], 99.95th=[ 735], 00:16:51.893 | 99.99th=[ 735] 00:16:51.893 bw ( KiB/s): min=32768, max=43094, per=4.59%, avg=35229.90, stdev=2373.56, samples=20 00:16:51.893 iops : min= 128, max= 168, avg=137.60, stdev= 9.21, samples=20 00:16:51.893 lat (msec) : 50=0.28%, 100=0.56%, 250=2.50%, 500=94.58%, 750=2.08% 00:16:51.893 cpu : usr=0.28%, sys=0.38%, ctx=1579, majf=0, minf=1 00:16:51.893 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:16:51.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.893 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:51.893 issued rwts: total=0,1440,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:51.893 job10: (groupid=0, jobs=1): err= 0: pid=88385: Fri Nov 29 16:52:14 2024 00:16:51.893 write: IOPS=132, BW=33.1MiB/s (34.7MB/s)(340MiB/10266msec); 0 zone resets 00:16:51.893 slat (usec): min=19, max=188327, avg=7364.11, stdev=14262.44 00:16:51.893 clat (msec): min=191, max=757, avg=475.47, stdev=57.86 00:16:51.893 lat (msec): min=191, max=757, avg=482.83, stdev=57.37 00:16:51.893 clat percentiles (msec): 00:16:51.893 | 1.00th=[ 234], 5.00th=[ 368], 10.00th=[ 426], 20.00th=[ 456], 00:16:51.893 | 30.00th=[ 472], 40.00th=[ 481], 50.00th=[ 485], 60.00th=[ 489], 00:16:51.893 | 70.00th=[ 502], 80.00th=[ 510], 90.00th=[ 514], 95.00th=[ 518], 00:16:51.893 | 99.00th=[ 667], 99.50th=[ 701], 99.90th=[ 760], 
99.95th=[ 760], 00:16:51.893 | 99.99th=[ 760] 00:16:51.893 bw ( KiB/s): min=30720, max=36864, per=4.33%, avg=33177.60, stdev=1707.03, samples=20 00:16:51.893 iops : min= 120, max= 144, avg=129.60, stdev= 6.67, samples=20 00:16:51.893 lat (msec) : 250=1.25%, 500=68.31%, 750=30.29%, 1000=0.15% 00:16:51.893 cpu : usr=0.27%, sys=0.37%, ctx=1027, majf=0, minf=1 00:16:51.893 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.4% 00:16:51.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.893 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:51.893 issued rwts: total=0,1360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.893 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:51.893 00:16:51.893 Run status group 0 (all jobs): 00:16:51.893 WRITE: bw=749MiB/s (785MB/s), 33.1MiB/s-210MiB/s (34.7MB/s-220MB/s), io=7693MiB (8067MB), run=10063-10274msec 00:16:51.893 00:16:51.894 Disk stats (read/write): 00:16:51.894 nvme0n1: ios=50/7725, merge=0/0, ticks=48/1213078, in_queue=1213126, util=97.62% 00:16:51.894 nvme10n1: ios=49/7801, merge=0/0, ticks=52/1213218, in_queue=1213270, util=97.92% 00:16:51.894 nvme1n1: ios=39/2707, merge=0/0, ticks=45/1236823, in_queue=1236868, util=98.02% 00:16:51.894 nvme2n1: ios=29/7821, merge=0/0, ticks=40/1212550, in_queue=1212590, util=98.12% 00:16:51.894 nvme3n1: ios=0/2809, merge=0/0, ticks=0/1238539, in_queue=1238539, util=98.10% 00:16:51.894 nvme4n1: ios=0/2816, merge=0/0, ticks=0/1239091, in_queue=1239091, util=98.36% 00:16:51.894 nvme5n1: ios=0/16776, merge=0/0, ticks=0/1217300, in_queue=1217300, util=98.30% 00:16:51.894 nvme6n1: ios=0/4097, merge=0/0, ticks=0/1237705, in_queue=1237705, util=98.39% 00:16:51.894 nvme7n1: ios=0/2741, merge=0/0, ticks=0/1237713, in_queue=1237713, util=98.75% 00:16:51.894 nvme8n1: ios=0/2860, merge=0/0, ticks=0/1237731, in_queue=1237731, util=98.76% 00:16:51.894 nvme9n1: ios=0/2697, merge=0/0, ticks=0/1237077, in_queue=1237077, util=98.85% 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:51.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:51.894 16:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:51.894 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:51.894 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:51.894 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:51.894 16:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:51.894 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:51.894 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:51.894 16:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:16:51.894 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:16:51.894 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:16:51.894 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:51.895 16:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:16:51.895 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:16:51.895 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:51.895 16:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:51.895 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:16:52.154 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:16:52.154 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:52.154 
16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:52.154 rmmod nvme_tcp 00:16:52.154 rmmod nvme_fabrics 00:16:52.154 rmmod nvme_keyring 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 87697 ']' 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 87697 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 87697 ']' 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 87697 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87697 00:16:52.154 killing process with pid 87697 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87697' 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 87697 00:16:52.154 16:52:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 87697 
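The trace above loops over subsystems cnode1 through cnode11: for each one the host connection is torn down with nvme-cli, waitforserial_disconnect polls lsblk until the SPDKn serial disappears, and the subsystem is then deleted over the SPDK JSON-RPC socket before the target process (pid 87697) is killed. A minimal standalone sketch of that per-subsystem teardown, assuming scripts/rpc.py from the SPDK repo on its default local socket (this is not the literal multiconnection.sh code):

  #!/usr/bin/env bash
  # Sketch of the per-subsystem teardown traced above.
  # Assumptions: 11 subsystems, serials SPDK1..SPDK11, rpc.py on the default socket.
  NVMF_SUBSYS=11
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      nqn="nqn.2016-06.io.spdk:cnode${i}"
      # Host side: drop the kernel NVMe-oF controller for this subsystem.
      nvme disconnect -n "$nqn"
      # Wait until the namespace with serial SPDK$i is gone from lsblk.
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
          sleep 1
      done
      # Target side: remove the subsystem via SPDK's JSON-RPC helper.
      ./scripts/rpc.py nvmf_delete_subsystem "$nqn"
  done

The waitforserial_disconnect helper traced above bounds this wait with a retry counter (the "local i=0" visible in the trace); the sketch polls unconditionally for brevity.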
00:16:52.413 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:52.413 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:52.413 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:52.413 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:16:52.413 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:16:52.413 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:52.413 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:16:52.413 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:52.413 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:52.413 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:52.413 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:16:52.672 00:16:52.672 real 0m48.929s 00:16:52.672 user 2m48.480s 00:16:52.672 sys 0m24.764s 00:16:52.672 ************************************ 
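The nvmf_veth_fini trace above undoes the test network: each bridge port is detached (nomaster) and brought down, the nvmf_br bridge and the initiator-side veth ends are deleted, the target-side ends are deleted inside the nvmf_tgt_ns_spdk namespace, and remove_spdk_ns drops the namespace itself. A condensed sketch of the same cleanup; the "|| true" guards and the explicit "ip netns delete" (standing in for remove_spdk_ns) are assumptions, not the exact helper code:

  #!/usr/bin/env bash
  # Sketch of the nvmf_veth_fini cleanup traced above.
  NS=nvmf_tgt_ns_spdk
  # Drop any SPDK_NVMF iptables rules added by the test (the iptr helper).
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" nomaster || true   # detach from the bridge
      ip link set "$port" down || true
  done
  ip link delete nvmf_br type bridge || true  # remove the bridge itself
  ip link delete nvmf_init_if || true         # initiator-side veth ends
  ip link delete nvmf_init_if2 || true
  # Target-side veth ends live inside the test namespace.
  ip netns exec "$NS" ip link delete nvmf_tgt_if || true
  ip netns exec "$NS" ip link delete nvmf_tgt_if2 || true
  ip netns delete "$NS" || true               # assumed remove_spdk_ns equivalent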
00:16:52.672 END TEST nvmf_multiconnection 00:16:52.672 ************************************ 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.672 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:52.931 ************************************ 00:16:52.931 START TEST nvmf_initiator_timeout 00:16:52.931 ************************************ 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:52.931 * Looking for test storage... 00:16:52.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.931 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:52.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.931 --rc genhtml_branch_coverage=1 00:16:52.931 --rc genhtml_function_coverage=1 00:16:52.932 --rc genhtml_legend=1 00:16:52.932 --rc geninfo_all_blocks=1 00:16:52.932 --rc geninfo_unexecuted_blocks=1 00:16:52.932 00:16:52.932 ' 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:52.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.932 --rc genhtml_branch_coverage=1 00:16:52.932 --rc genhtml_function_coverage=1 00:16:52.932 --rc genhtml_legend=1 00:16:52.932 --rc geninfo_all_blocks=1 00:16:52.932 --rc geninfo_unexecuted_blocks=1 00:16:52.932 00:16:52.932 ' 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:52.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.932 --rc genhtml_branch_coverage=1 00:16:52.932 --rc genhtml_function_coverage=1 00:16:52.932 --rc genhtml_legend=1 00:16:52.932 --rc geninfo_all_blocks=1 00:16:52.932 --rc geninfo_unexecuted_blocks=1 00:16:52.932 00:16:52.932 ' 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:52.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.932 --rc genhtml_branch_coverage=1 00:16:52.932 --rc genhtml_function_coverage=1 00:16:52.932 --rc genhtml_legend=1 00:16:52.932 --rc geninfo_all_blocks=1 00:16:52.932 --rc geninfo_unexecuted_blocks=1 00:16:52.932 00:16:52.932 ' 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.932 16:52:16 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:52.932 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
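nvmftestinit has selected the virtual (veth) transport path, so the nvmf_veth_init trace that follows rebuilds the per-test topology from scratch: it first confirms no stale nvmf_* links or namespaces are left over, then creates the nvmf_tgt_ns_spdk namespace, four veth pairs, and the 10.0.0.0/24 addressing (initiator ends on .1/.2, target ends on .3/.4 inside the namespace). A condensed sketch of that setup, with the final bridge step assumed from the *_br peer names and the teardown shown earlier:

  #!/usr/bin/env bash
  # Condensed sketch of the nvmf_veth_init trace that follows: one target
  # namespace, four veth pairs, 10.0.0.0/24 addressing.
  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  # Each *_if end carries an address; each *_br peer is destined for the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # The target-facing ends move into the namespace where nvmf_tgt will listen.
  ip link set nvmf_tgt_if netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  # Addressing as assigned in the trace: initiators .1/.2, target side .3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_if2 up
  # Assumption: the remainder of the trace brings the peer ends up and enslaves
  # the *_br ends to a bridge named nvmf_br, which the teardown above deletes.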
00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:52.932 Cannot find device "nvmf_init_br" 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:16:52.932 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:53.192 Cannot find device "nvmf_init_br2" 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:53.192 Cannot find device "nvmf_tgt_br" 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:53.192 Cannot find device "nvmf_tgt_br2" 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:53.192 Cannot find device "nvmf_init_br" 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:53.192 Cannot find device "nvmf_init_br2" 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:53.192 Cannot find device "nvmf_tgt_br" 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:53.192 Cannot find device "nvmf_tgt_br2" 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:16:53.192 16:52:16 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:53.192 Cannot find device "nvmf_br" 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:53.192 Cannot find device "nvmf_init_if" 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:53.192 Cannot find device "nvmf_init_if2" 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:53.192 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:53.192 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:53.192 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:53.452 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:53.452 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:53.452 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:16:53.452 00:16:53.452 --- 10.0.0.3 ping statistics --- 00:16:53.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.452 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:53.452 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:53.452 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:16:53.452 00:16:53.452 --- 10.0.0.4 ping statistics --- 00:16:53.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.452 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:53.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:53.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:53.452 00:16:53.452 --- 10.0.0.1 ping statistics --- 00:16:53.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.452 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:53.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:16:53.452 00:16:53.452 --- 10.0.0.2 ping statistics --- 00:16:53.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.452 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=88807 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 88807 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 88807 ']' 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.452 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.453 [2024-11-29 16:52:17.145070] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:16:53.453 [2024-11-29 16:52:17.145356] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.712 [2024-11-29 16:52:17.273153] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:53.712 [2024-11-29 16:52:17.295860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:53.712 [2024-11-29 16:52:17.316321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.712 [2024-11-29 16:52:17.316643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.712 [2024-11-29 16:52:17.316795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.712 [2024-11-29 16:52:17.316808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.712 [2024-11-29 16:52:17.316816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:53.712 [2024-11-29 16:52:17.317727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.712 [2024-11-29 16:52:17.317895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.712 [2024-11-29 16:52:17.318350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.712 [2024-11-29 16:52:17.318376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.712 [2024-11-29 16:52:17.350152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.712 Malloc0 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.712 Delay0 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.712 [2024-11-29 16:52:17.491208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:53.712 16:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.712 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.970 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.970 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:53.971 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.971 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.971 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.971 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:53.971 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.971 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.971 [2024-11-29 16:52:17.519392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:53.971 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.971 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:53.971 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:16:53.971 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:16:53.971 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:53.971 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:53.971 16:52:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:16:55.873 16:52:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:55.873 16:52:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:55.873 16:52:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:56.132 16:52:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:56.132 16:52:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:56.132 16:52:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:16:56.132 16:52:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=88864 00:16:56.132 16:52:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:16:56.132 16:52:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:16:56.132 [global] 00:16:56.132 thread=1 00:16:56.132 invalidate=1 00:16:56.132 rw=write 00:16:56.132 time_based=1 00:16:56.132 runtime=60 00:16:56.132 ioengine=libaio 00:16:56.132 direct=1 00:16:56.132 bs=4096 00:16:56.132 iodepth=1 00:16:56.132 norandommap=0 00:16:56.132 numjobs=1 00:16:56.132 00:16:56.132 verify_dump=1 00:16:56.132 verify_backlog=512 00:16:56.132 verify_state_save=0 00:16:56.132 do_verify=1 00:16:56.132 verify=crc32c-intel 00:16:56.132 [job0] 00:16:56.132 filename=/dev/nvme0n1 00:16:56.132 Could not set queue depth (nvme0n1) 00:16:56.132 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.132 fio-3.35 00:16:56.132 Starting 1 thread 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:59.414 true 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:59.414 true 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:59.414 true 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:59.414 true 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.414 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:17:01.946 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:17:01.946 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.946 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:01.946 true 00:17:01.946 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.946 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:17:01.946 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.946 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:01.946 true 00:17:01.946 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.946 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:17:01.946 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.946 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:02.205 true 00:17:02.205 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.205 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:17:02.205 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.205 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:02.205 true 00:17:02.205 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.205 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:17:02.205 16:52:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 88864 00:17:58.441 00:17:58.441 job0: (groupid=0, jobs=1): err= 0: pid=88885: Fri Nov 29 16:53:19 2024 00:17:58.441 read: IOPS=819, BW=3277KiB/s (3355kB/s)(192MiB/60000msec) 00:17:58.441 slat (usec): min=9, max=112, avg=12.75, stdev= 4.03 00:17:58.441 clat (usec): min=155, max=2938, avg=199.75, stdev=29.53 00:17:58.441 lat (usec): min=166, max=2960, avg=212.50, stdev=30.42 00:17:58.441 clat percentiles (usec): 00:17:58.441 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:17:58.441 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:17:58.441 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 241], 00:17:58.441 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 355], 99.95th=[ 404], 00:17:58.441 | 99.99th=[ 1106] 00:17:58.441 write: IOPS=822, BW=3290KiB/s (3369kB/s)(193MiB/60000msec); 0 zone resets 00:17:58.441 slat (usec): min=12, max=14390, avg=20.36, stdev=76.23 00:17:58.441 clat (usec): min=105, max=40779k, avg=980.68, stdev=183557.81 00:17:58.441 lat (usec): min=133, max=40779k, avg=1001.05, stdev=183557.82 00:17:58.441 clat percentiles (usec): 00:17:58.441 | 1.00th=[ 122], 5.00th=[ 126], 10.00th=[ 130], 20.00th=[ 137], 00:17:58.441 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 157], 00:17:58.441 | 70.00th=[ 163], 80.00th=[ 172], 90.00th=[ 184], 95.00th=[ 194], 00:17:58.441 | 99.00th=[ 219], 
99.50th=[ 231], 99.90th=[ 265], 99.95th=[ 289], 00:17:58.441 | 99.99th=[ 449] 00:17:58.441 bw ( KiB/s): min= 48, max=12288, per=100.00%, avg=9870.49, stdev=2250.07, samples=39 00:17:58.441 iops : min= 12, max= 3072, avg=2467.62, stdev=562.52, samples=39 00:17:58.441 lat (usec) : 250=98.49%, 500=1.49%, 750=0.01%, 1000=0.01% 00:17:58.441 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:17:58.441 cpu : usr=0.56%, sys=2.15%, ctx=98514, majf=0, minf=5 00:17:58.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:58.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.441 issued rwts: total=49152,49355,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:58.441 00:17:58.441 Run status group 0 (all jobs): 00:17:58.441 READ: bw=3277KiB/s (3355kB/s), 3277KiB/s-3277KiB/s (3355kB/s-3355kB/s), io=192MiB (201MB), run=60000-60000msec 00:17:58.441 WRITE: bw=3290KiB/s (3369kB/s), 3290KiB/s-3290KiB/s (3369kB/s-3369kB/s), io=193MiB (202MB), run=60000-60000msec 00:17:58.441 00:17:58.441 Disk stats (read/write): 00:17:58.441 nvme0n1: ios=49098/49152, merge=0/0, ticks=10257/8214, in_queue=18471, util=99.68% 00:17:58.441 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:17:58.441 nvmf hotplug test: fio successful as expected 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:17:58.441 16:53:20 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:58.441 rmmod nvme_tcp 00:17:58.441 rmmod nvme_fabrics 00:17:58.441 rmmod nvme_keyring 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 88807 ']' 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 88807 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 88807 ']' 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 88807 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88807 00:17:58.441 killing process with pid 88807 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88807' 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 88807 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 88807 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:17:58.441 16:53:20 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.441 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:17:58.442 00:17:58.442 real 1m4.042s 00:17:58.442 user 3m50.901s 00:17:58.442 sys 0m21.088s 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:58.442 ************************************ 00:17:58.442 END TEST nvmf_initiator_timeout 00:17:58.442 ************************************ 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test 
nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:58.442 ************************************ 00:17:58.442 START TEST nvmf_nsid 00:17:58.442 ************************************ 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:58.442 * Looking for test storage... 00:17:58.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:58.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.442 --rc genhtml_branch_coverage=1 00:17:58.442 --rc genhtml_function_coverage=1 00:17:58.442 --rc genhtml_legend=1 00:17:58.442 --rc geninfo_all_blocks=1 00:17:58.442 --rc geninfo_unexecuted_blocks=1 00:17:58.442 00:17:58.442 ' 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:58.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.442 --rc genhtml_branch_coverage=1 00:17:58.442 --rc genhtml_function_coverage=1 00:17:58.442 --rc genhtml_legend=1 00:17:58.442 --rc geninfo_all_blocks=1 00:17:58.442 --rc geninfo_unexecuted_blocks=1 00:17:58.442 00:17:58.442 ' 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:58.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.442 --rc genhtml_branch_coverage=1 00:17:58.442 --rc genhtml_function_coverage=1 00:17:58.442 --rc genhtml_legend=1 00:17:58.442 --rc geninfo_all_blocks=1 00:17:58.442 --rc geninfo_unexecuted_blocks=1 00:17:58.442 00:17:58.442 ' 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:58.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.442 --rc genhtml_branch_coverage=1 00:17:58.442 --rc genhtml_function_coverage=1 00:17:58.442 --rc genhtml_legend=1 00:17:58.442 --rc geninfo_all_blocks=1 00:17:58.442 --rc geninfo_unexecuted_blocks=1 00:17:58.442 00:17:58.442 ' 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
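
As in the previous test, common.sh generates a host NQN with nvme gen-hostnqn and reuses its UUID portion as the host ID (lines 17-18 in the entries below). One way to reproduce that pairing in a standalone script; the parameter expansion used here to strip the prefix is an assumption, not necessarily the suite's exact code:

  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}      # keep only the trailing UUID
  echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"
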
00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.442 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:58.443 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:58.443 Cannot find device "nvmf_init_br" 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:58.443 Cannot find device "nvmf_init_br2" 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:58.443 Cannot find device "nvmf_tgt_br" 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:58.443 Cannot find device "nvmf_tgt_br2" 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:58.443 Cannot find device "nvmf_init_br" 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:58.443 Cannot find device "nvmf_init_br2" 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:58.443 Cannot find device "nvmf_tgt_br" 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:58.443 Cannot find device "nvmf_tgt_br2" 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:58.443 Cannot find device "nvmf_br" 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:58.443 Cannot find device "nvmf_init_if" 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:58.443 Cannot find device "nvmf_init_if2" 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:17:58.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:58.443 16:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
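For reference, the topology the harness builds in the trace above reduces to the following condensed sketch (interface names, addresses, and the 4420 listener port are taken from the commands shown; the second veth pair and a few of the up/down steps are omitted, so this is an outline of the nvmf_veth_init/ipts helpers rather than their full bodies):

    # network namespace that will host the SPDK target
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the initiator end stays in the root namespace, the target end moves into the netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addressing: initiator 10.0.0.1/24, target 10.0.0.3/24 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # bridge the peer ends so initiator and target traffic can flow
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP listener port and verify connectivity
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3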
00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:58.443 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:58.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:58.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:17:58.444 00:17:58.444 --- 10.0.0.3 ping statistics --- 00:17:58.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.444 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:58.444 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:58.444 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:17:58.444 00:17:58.444 --- 10.0.0.4 ping statistics --- 00:17:58.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.444 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:58.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:58.444 00:17:58.444 --- 10.0.0.1 ping statistics --- 00:17:58.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.444 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:58.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:58.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:17:58.444 00:17:58.444 --- 10.0.0.2 ping statistics --- 00:17:58.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.444 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=89765 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 89765 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 89765 ']' 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.444 16:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:58.444 [2024-11-29 16:53:21.219649] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:58.444 [2024-11-29 16:53:21.219745] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.444 [2024-11-29 16:53:21.347153] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:58.444 [2024-11-29 16:53:21.379878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.444 [2024-11-29 16:53:21.402204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.444 [2024-11-29 16:53:21.402284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.444 [2024-11-29 16:53:21.402309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.444 [2024-11-29 16:53:21.402353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.444 [2024-11-29 16:53:21.402363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.444 [2024-11-29 16:53:21.402758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.444 [2024-11-29 16:53:21.436435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:58.444 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.444 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:58.444 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:58.444 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:58.444 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:58.444 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.444 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:58.444 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=89797 00:17:58.444 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:58.444 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@56 -- # uuidgen 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=4953d126-1c6a-40b0-9db7-80575e733251 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=ecddbd5b-6b2c-4a81-938b-4baf6e83df13 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=e327174c-82f3-4cbb-8be3-b8c5853ff6d7 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:58.704 null0 00:17:58.704 null1 00:17:58.704 null2 00:17:58.704 [2024-11-29 16:53:22.279360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.704 [2024-11-29 16:53:22.295960] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:17:58.704 [2024-11-29 16:53:22.296089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89797 ] 00:17:58.704 [2024-11-29 16:53:22.303455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:58.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 89797 /var/tmp/tgt2.sock 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 89797 ']' 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.704 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:58.704 [2024-11-29 16:53:22.422916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:58.704 [2024-11-29 16:53:22.455727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.704 [2024-11-29 16:53:22.479998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.964 [2024-11-29 16:53:22.524593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:58.964 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.964 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:58.964 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:59.532 [2024-11-29 16:53:23.014894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.532 [2024-11-29 16:53:23.030987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:17:59.532 nvme0n1 nvme0n2 00:17:59.532 nvme1n1 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:17:59.532 16:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:18:00.468 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:00.468 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:00.468 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:00.468 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:00.468 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:00.468 16:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 4953d126-1c6a-40b0-9db7-80575e733251 00:18:00.468 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:00.468 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:18:00.468 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:18:00.468 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:18:00.468 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:00.727 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4953d1261c6a40b09db780575e733251 00:18:00.727 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4953D1261C6A40B09DB780575E733251 00:18:00.727 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 4953D1261C6A40B09DB780575E733251 == \4\9\5\3\D\1\2\6\1\C\6\A\4\0\B\0\9\D\B\7\8\0\5\7\5\E\7\3\3\2\5\1 ]] 00:18:00.727 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:18:00.727 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid ecddbd5b-6b2c-4a81-938b-4baf6e83df13 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ecddbd5b6b2c4a81938b4baf6e83df13 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo ECDDBD5B6B2C4A81938B4BAF6E83DF13 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ ECDDBD5B6B2C4A81938B4BAF6E83DF13 == \E\C\D\D\B\D\5\B\6\B\2\C\4\A\8\1\9\3\8\B\4\B\A\F\6\E\8\3\D\F\1\3 ]] 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:00.728 16:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid e327174c-82f3-4cbb-8be3-b8c5853ff6d7 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e327174c82f34cbb8be3b8c5853ff6d7 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E327174C82F34CBB8BE3B8C5853FF6D7 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ E327174C82F34CBB8BE3B8C5853FF6D7 == \E\3\2\7\1\7\4\C\8\2\F\3\4\C\B\B\8\B\E\3\B\8\C\5\8\5\3\F\F\6\D\7 ]] 00:18:00.728 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:18:00.987 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:18:00.987 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:18:00.987 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 89797 00:18:00.987 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 89797 ']' 00:18:00.987 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 89797 00:18:00.987 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:18:00.987 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.987 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89797 00:18:00.987 killing process with pid 89797 00:18:00.987 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:00.987 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:00.987 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89797' 00:18:00.987 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 89797 00:18:00.987 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 89797 00:18:01.246 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:18:01.246 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:01.246 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:18:01.246 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:01.246 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:18:01.246 16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:01.246 
16:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:01.246 rmmod nvme_tcp 00:18:01.246 rmmod nvme_fabrics 00:18:01.246 rmmod nvme_keyring 00:18:01.246 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:01.246 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:18:01.246 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:18:01.246 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 89765 ']' 00:18:01.246 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 89765 00:18:01.246 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 89765 ']' 00:18:01.246 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 89765 00:18:01.246 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:18:01.246 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.246 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89765 00:18:01.520 killing process with pid 89765 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89765' 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 89765 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 89765 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link 
set nvmf_init_br2 down 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:01.520 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:01.785 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:01.785 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:01.785 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.785 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:01.785 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.785 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.785 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.785 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:18:01.785 00:18:01.785 real 0m4.844s 00:18:01.785 user 0m7.068s 00:18:01.785 sys 0m1.514s 00:18:01.785 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.785 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:01.785 ************************************ 00:18:01.785 END TEST nvmf_nsid 00:18:01.785 ************************************ 00:18:01.785 16:53:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:01.785 ************************************ 00:18:01.785 END TEST nvmf_target_extra 00:18:01.785 ************************************ 00:18:01.785 00:18:01.785 real 6m52.719s 00:18:01.785 user 17m10.257s 00:18:01.785 sys 1m50.977s 00:18:01.785 16:53:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.785 16:53:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:01.785 16:53:25 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:18:01.785 16:53:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:01.785 16:53:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.785 16:53:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:01.785 ************************************ 00:18:01.785 START TEST nvmf_host 00:18:01.785 ************************************ 00:18:01.785 16:53:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:18:02.044 * Looking for test storage... 
00:18:02.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:02.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.044 --rc genhtml_branch_coverage=1 00:18:02.044 --rc genhtml_function_coverage=1 00:18:02.044 --rc genhtml_legend=1 00:18:02.044 --rc geninfo_all_blocks=1 00:18:02.044 --rc geninfo_unexecuted_blocks=1 00:18:02.044 00:18:02.044 ' 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:02.044 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:02.044 --rc genhtml_branch_coverage=1 00:18:02.044 --rc genhtml_function_coverage=1 00:18:02.044 --rc genhtml_legend=1 00:18:02.044 --rc geninfo_all_blocks=1 00:18:02.044 --rc geninfo_unexecuted_blocks=1 00:18:02.044 00:18:02.044 ' 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:02.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.044 --rc genhtml_branch_coverage=1 00:18:02.044 --rc genhtml_function_coverage=1 00:18:02.044 --rc genhtml_legend=1 00:18:02.044 --rc geninfo_all_blocks=1 00:18:02.044 --rc geninfo_unexecuted_blocks=1 00:18:02.044 00:18:02.044 ' 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:02.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.044 --rc genhtml_branch_coverage=1 00:18:02.044 --rc genhtml_function_coverage=1 00:18:02.044 --rc genhtml_legend=1 00:18:02.044 --rc geninfo_all_blocks=1 00:18:02.044 --rc geninfo_unexecuted_blocks=1 00:18:02.044 00:18:02.044 ' 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.044 16:53:25 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.045 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:02.045 
16:53:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.045 ************************************ 00:18:02.045 START TEST nvmf_identify 00:18:02.045 ************************************ 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:02.045 * Looking for test storage... 00:18:02.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:18:02.045 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:02.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.304 --rc genhtml_branch_coverage=1 00:18:02.304 --rc genhtml_function_coverage=1 00:18:02.304 --rc genhtml_legend=1 00:18:02.304 --rc geninfo_all_blocks=1 00:18:02.304 --rc geninfo_unexecuted_blocks=1 00:18:02.304 00:18:02.304 ' 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:02.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.304 --rc genhtml_branch_coverage=1 00:18:02.304 --rc genhtml_function_coverage=1 00:18:02.304 --rc genhtml_legend=1 00:18:02.304 --rc geninfo_all_blocks=1 00:18:02.304 --rc geninfo_unexecuted_blocks=1 00:18:02.304 00:18:02.304 ' 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:02.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.304 --rc genhtml_branch_coverage=1 00:18:02.304 --rc genhtml_function_coverage=1 00:18:02.304 --rc genhtml_legend=1 00:18:02.304 --rc geninfo_all_blocks=1 00:18:02.304 --rc geninfo_unexecuted_blocks=1 00:18:02.304 00:18:02.304 ' 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:02.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.304 --rc genhtml_branch_coverage=1 00:18:02.304 --rc genhtml_function_coverage=1 00:18:02.304 --rc genhtml_legend=1 00:18:02.304 --rc geninfo_all_blocks=1 00:18:02.304 --rc geninfo_unexecuted_blocks=1 00:18:02.304 00:18:02.304 ' 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.304 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.305 
16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.305 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.305 16:53:25 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:02.305 Cannot find device "nvmf_init_br" 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:02.305 Cannot find device "nvmf_init_br2" 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:02.305 Cannot find device "nvmf_tgt_br" 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:18:02.305 Cannot find device "nvmf_tgt_br2" 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:18:02.305 16:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:02.305 Cannot find device "nvmf_init_br" 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:02.305 Cannot find device "nvmf_init_br2" 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:02.305 Cannot find device "nvmf_tgt_br" 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:02.305 Cannot find device "nvmf_tgt_br2" 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:02.305 Cannot find device "nvmf_br" 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:02.305 Cannot find device "nvmf_init_if" 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:02.305 Cannot find device "nvmf_init_if2" 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:02.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:02.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:02.305 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:02.595 
16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:02.595 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:02.596 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:02.596 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:18:02.596 00:18:02.596 --- 10.0.0.3 ping statistics --- 00:18:02.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.596 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:02.596 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:02.596 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:18:02.596 00:18:02.596 --- 10.0.0.4 ping statistics --- 00:18:02.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.596 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:02.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:02.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:02.596 00:18:02.596 --- 10.0.0.1 ping statistics --- 00:18:02.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.596 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:02.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:18:02.596 00:18:02.596 --- 10.0.0.2 ping statistics --- 00:18:02.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.596 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=90151 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 90151 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 90151 ']' 00:18:02.596 
16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.596 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:02.855 [2024-11-29 16:53:26.417719] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:02.855 [2024-11-29 16:53:26.418047] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.855 [2024-11-29 16:53:26.546147] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:02.855 [2024-11-29 16:53:26.577403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:02.855 [2024-11-29 16:53:26.602963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.855 [2024-11-29 16:53:26.603272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.855 [2024-11-29 16:53:26.603545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.855 [2024-11-29 16:53:26.603827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.855 [2024-11-29 16:53:26.603950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
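Everything up to this point is the harness building a self-contained NVMe/TCP test topology: a network namespace for the target, veth pairs for the initiator and target sides, a bridge joining them, iptables ACCEPT rules for port 4420, ping checks in both directions, and finally nvmf_tgt launched inside that namespace. A condensed, hedged sketch of the same setup follows; addresses, interface names, and the nvmf_tgt flags are copied from the log, only one veth pair per side is shown (the harness actually creates two of each), and error handling is omitted.

# Namespace for the target plus one veth pair per side (condensed from nvmf_veth_init).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: initiator side 10.0.0.1/24 on the host, target side 10.0.0.3/24 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# Bring the links up and bridge the two "br" ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP traffic to the default port and verify reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3

# Launch the SPDK NVMe-oF target inside the namespace with the flags seen above
# (shared-memory id 0, trace mask 0xFFFF, core mask 0xF), then wait for the RPC
# socket; waitforlisten is SPDK's helper, approximated here by a simple poll loop.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done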
00:18:02.855 [2024-11-29 16:53:26.604995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.855 [2024-11-29 16:53:26.605119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.855 [2024-11-29 16:53:26.605202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.855 [2024-11-29 16:53:26.605201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:02.855 [2024-11-29 16:53:26.639312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:03.114 [2024-11-29 16:53:26.699821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:03.114 Malloc0 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:03.114 [2024-11-29 16:53:26.815612] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:03.114 [ 00:18:03.114 { 00:18:03.114 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:03.114 "subtype": "Discovery", 00:18:03.114 "listen_addresses": [ 00:18:03.114 { 00:18:03.114 "trtype": "TCP", 00:18:03.114 "adrfam": "IPv4", 00:18:03.114 "traddr": "10.0.0.3", 00:18:03.114 "trsvcid": "4420" 00:18:03.114 } 00:18:03.114 ], 00:18:03.114 "allow_any_host": true, 00:18:03.114 "hosts": [] 00:18:03.114 }, 00:18:03.114 { 00:18:03.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.114 "subtype": "NVMe", 00:18:03.114 "listen_addresses": [ 00:18:03.114 { 00:18:03.114 "trtype": "TCP", 00:18:03.114 "adrfam": "IPv4", 00:18:03.114 "traddr": "10.0.0.3", 00:18:03.114 "trsvcid": "4420" 00:18:03.114 } 00:18:03.114 ], 00:18:03.114 "allow_any_host": true, 00:18:03.114 "hosts": [], 00:18:03.114 "serial_number": "SPDK00000000000001", 00:18:03.114 "model_number": "SPDK bdev Controller", 00:18:03.114 "max_namespaces": 32, 00:18:03.114 "min_cntlid": 1, 00:18:03.114 "max_cntlid": 65519, 00:18:03.114 "namespaces": [ 00:18:03.114 { 00:18:03.114 "nsid": 1, 00:18:03.114 "bdev_name": "Malloc0", 00:18:03.114 "name": "Malloc0", 00:18:03.114 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:18:03.114 "eui64": "ABCDEF0123456789", 00:18:03.114 "uuid": "b39c2ad4-77ab-4609-abfa-02e3eb807171" 00:18:03.114 } 00:18:03.114 ] 00:18:03.114 } 00:18:03.114 ] 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.114 16:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:18:03.114 [2024-11-29 16:53:26.879874] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:03.114 [2024-11-29 16:53:26.880092] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90179 ] 00:18:03.377 [2024-11-29 16:53:27.002804] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
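The subsystem configuration exercised by identify.sh above can be reproduced by hand; rpc_cmd in SPDK's test suite is essentially a wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket. A sketch of the equivalent calls, with every flag and value copied verbatim from the rpc_cmd invocations in the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport; "-o -u 8192" copied verbatim from the harness's transport options.
$rpc nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev with 512-byte blocks backing namespace 1.
$rpc bdev_malloc_create 64 512 -b Malloc0

# NVM subsystem with a fixed serial number, open to any host (-a), plus the namespace.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789

# Listeners for the data subsystem and the discovery service on the target-side address.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

The harness then dumps the resulting configuration with nvmf_get_subsystems (the JSON above) before pointing spdk_nvme_identify at the discovery NQN on 10.0.0.3:4420 with full debug logging (-L all), which produces the nvme_tcp/nvme_ctrlr state-machine traces and the controller report that follow.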
00:18:03.377 [2024-11-29 16:53:27.040377] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:18:03.377 [2024-11-29 16:53:27.040460] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:03.377 [2024-11-29 16:53:27.040468] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:03.377 [2024-11-29 16:53:27.040481] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:03.377 [2024-11-29 16:53:27.040490] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:03.377 [2024-11-29 16:53:27.040814] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:18:03.377 [2024-11-29 16:53:27.040874] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfa7a10 0 00:18:03.377 [2024-11-29 16:53:27.054408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:03.377 [2024-11-29 16:53:27.054433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:03.377 [2024-11-29 16:53:27.054456] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:03.377 [2024-11-29 16:53:27.054460] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:03.377 [2024-11-29 16:53:27.054495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.054502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.054506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa7a10) 00:18:03.377 [2024-11-29 16:53:27.054519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:03.377 [2024-11-29 16:53:27.054550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff180, cid 0, qid 0 00:18:03.377 [2024-11-29 16:53:27.062348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.377 [2024-11-29 16:53:27.062368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.377 [2024-11-29 16:53:27.062390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.062395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff180) on tqpair=0xfa7a10 00:18:03.377 [2024-11-29 16:53:27.062406] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:03.377 [2024-11-29 16:53:27.062414] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:18:03.377 [2024-11-29 16:53:27.062420] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:18:03.377 [2024-11-29 16:53:27.062438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.062444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.062447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa7a10) 00:18:03.377 [2024-11-29 16:53:27.062456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.377 [2024-11-29 
16:53:27.062482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff180, cid 0, qid 0 00:18:03.377 [2024-11-29 16:53:27.062538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.377 [2024-11-29 16:53:27.062545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.377 [2024-11-29 16:53:27.062548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.062552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff180) on tqpair=0xfa7a10 00:18:03.377 [2024-11-29 16:53:27.062558] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:18:03.377 [2024-11-29 16:53:27.062565] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:18:03.377 [2024-11-29 16:53:27.062572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.062576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.062580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa7a10) 00:18:03.377 [2024-11-29 16:53:27.062587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.377 [2024-11-29 16:53:27.062620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff180, cid 0, qid 0 00:18:03.377 [2024-11-29 16:53:27.062682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.377 [2024-11-29 16:53:27.062689] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.377 [2024-11-29 16:53:27.062693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.062697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff180) on tqpair=0xfa7a10 00:18:03.377 [2024-11-29 16:53:27.062702] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:18:03.377 [2024-11-29 16:53:27.062711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:18:03.377 [2024-11-29 16:53:27.062718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.062723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.062726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa7a10) 00:18:03.377 [2024-11-29 16:53:27.062734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.377 [2024-11-29 16:53:27.062751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff180, cid 0, qid 0 00:18:03.377 [2024-11-29 16:53:27.062797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.377 [2024-11-29 16:53:27.062803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.377 [2024-11-29 16:53:27.062807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.062811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff180) on tqpair=0xfa7a10 00:18:03.377 [2024-11-29 16:53:27.062817] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:03.377 [2024-11-29 16:53:27.062827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.062832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.062835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa7a10) 00:18:03.377 [2024-11-29 16:53:27.062843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.377 [2024-11-29 16:53:27.062859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff180, cid 0, qid 0 00:18:03.377 [2024-11-29 16:53:27.062906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.377 [2024-11-29 16:53:27.062913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.377 [2024-11-29 16:53:27.062917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.062921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff180) on tqpair=0xfa7a10 00:18:03.377 [2024-11-29 16:53:27.062926] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:18:03.377 [2024-11-29 16:53:27.062931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:18:03.377 [2024-11-29 16:53:27.062939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:03.377 [2024-11-29 16:53:27.063044] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:18:03.377 [2024-11-29 16:53:27.063050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:03.377 [2024-11-29 16:53:27.063059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa7a10) 00:18:03.377 [2024-11-29 16:53:27.063074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.377 [2024-11-29 16:53:27.063092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff180, cid 0, qid 0 00:18:03.377 [2024-11-29 16:53:27.063139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.377 [2024-11-29 16:53:27.063146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.377 [2024-11-29 16:53:27.063150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff180) on tqpair=0xfa7a10 00:18:03.377 [2024-11-29 16:53:27.063159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:03.377 [2024-11-29 
16:53:27.063169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa7a10) 00:18:03.377 [2024-11-29 16:53:27.063185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.377 [2024-11-29 16:53:27.063202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff180, cid 0, qid 0 00:18:03.377 [2024-11-29 16:53:27.063249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.377 [2024-11-29 16:53:27.063256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.377 [2024-11-29 16:53:27.063259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff180) on tqpair=0xfa7a10 00:18:03.377 [2024-11-29 16:53:27.063268] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:03.377 [2024-11-29 16:53:27.063274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:18:03.377 [2024-11-29 16:53:27.063282] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:18:03.377 [2024-11-29 16:53:27.063292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:18:03.377 [2024-11-29 16:53:27.063302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa7a10) 00:18:03.377 [2024-11-29 16:53:27.063314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.377 [2024-11-29 16:53:27.063348] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff180, cid 0, qid 0 00:18:03.377 [2024-11-29 16:53:27.063444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:03.377 [2024-11-29 16:53:27.063454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:03.377 [2024-11-29 16:53:27.063458] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063462] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfa7a10): datao=0, datal=4096, cccid=0 00:18:03.377 [2024-11-29 16:53:27.063467] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfff180) on tqpair(0xfa7a10): expected_datao=0, payload_size=4096 00:18:03.377 [2024-11-29 16:53:27.063472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063481] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063485] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.377 [2024-11-29 16:53:27.063500] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.377 [2024-11-29 16:53:27.063504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff180) on tqpair=0xfa7a10 00:18:03.377 [2024-11-29 16:53:27.063517] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:18:03.377 [2024-11-29 16:53:27.063523] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:18:03.377 [2024-11-29 16:53:27.063527] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:18:03.377 [2024-11-29 16:53:27.063537] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:18:03.377 [2024-11-29 16:53:27.063543] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:18:03.377 [2024-11-29 16:53:27.063548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:18:03.377 [2024-11-29 16:53:27.063558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:18:03.377 [2024-11-29 16:53:27.063566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa7a10) 00:18:03.377 [2024-11-29 16:53:27.063583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:03.377 [2024-11-29 16:53:27.063605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff180, cid 0, qid 0 00:18:03.377 [2024-11-29 16:53:27.063690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.377 [2024-11-29 16:53:27.063698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.377 [2024-11-29 16:53:27.063703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff180) on tqpair=0xfa7a10 00:18:03.377 [2024-11-29 16:53:27.063716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfa7a10) 00:18:03.377 [2024-11-29 16:53:27.063732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.377 [2024-11-29 16:53:27.063739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfa7a10) 00:18:03.377 [2024-11-29 16:53:27.063754] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.377 [2024-11-29 16:53:27.063760] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfa7a10) 00:18:03.377 [2024-11-29 16:53:27.063775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.377 [2024-11-29 16:53:27.063781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.377 [2024-11-29 16:53:27.063796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.377 [2024-11-29 16:53:27.063801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:03.377 [2024-11-29 16:53:27.063812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:03.377 [2024-11-29 16:53:27.063820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.063824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfa7a10) 00:18:03.377 [2024-11-29 16:53:27.063831] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.377 [2024-11-29 16:53:27.063859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff180, cid 0, qid 0 00:18:03.377 [2024-11-29 16:53:27.063867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff300, cid 1, qid 0 00:18:03.377 [2024-11-29 16:53:27.063872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff480, cid 2, qid 0 00:18:03.377 [2024-11-29 16:53:27.063878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.377 [2024-11-29 16:53:27.063883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff780, cid 4, qid 0 00:18:03.377 [2024-11-29 16:53:27.063985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.377 [2024-11-29 16:53:27.063993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.377 [2024-11-29 16:53:27.064011] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.377 [2024-11-29 16:53:27.064015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff780) on tqpair=0xfa7a10 00:18:03.378 [2024-11-29 16:53:27.064021] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:18:03.378 [2024-11-29 16:53:27.064027] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:18:03.378 [2024-11-29 16:53:27.064038] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfa7a10) 00:18:03.378 [2024-11-29 16:53:27.064050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.378 [2024-11-29 16:53:27.064068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff780, cid 4, qid 0 00:18:03.378 [2024-11-29 16:53:27.064127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:03.378 [2024-11-29 16:53:27.064133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:03.378 [2024-11-29 16:53:27.064137] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064141] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfa7a10): datao=0, datal=4096, cccid=4 00:18:03.378 [2024-11-29 16:53:27.064146] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfff780) on tqpair(0xfa7a10): expected_datao=0, payload_size=4096 00:18:03.378 [2024-11-29 16:53:27.064150] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064158] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064162] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.378 [2024-11-29 16:53:27.064176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.378 [2024-11-29 16:53:27.064180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff780) on tqpair=0xfa7a10 00:18:03.378 [2024-11-29 16:53:27.064197] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:18:03.378 [2024-11-29 16:53:27.064222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfa7a10) 00:18:03.378 [2024-11-29 16:53:27.064235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.378 [2024-11-29 16:53:27.064243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfa7a10) 00:18:03.378 [2024-11-29 16:53:27.064257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.378 [2024-11-29 16:53:27.064281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff780, cid 4, qid 0 00:18:03.378 [2024-11-29 16:53:27.064288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff900, cid 5, qid 0 00:18:03.378 [2024-11-29 16:53:27.064405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:03.378 [2024-11-29 16:53:27.064414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:18:03.378 [2024-11-29 16:53:27.064418] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064422] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfa7a10): datao=0, datal=1024, cccid=4 00:18:03.378 [2024-11-29 16:53:27.064426] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfff780) on tqpair(0xfa7a10): expected_datao=0, payload_size=1024 00:18:03.378 [2024-11-29 16:53:27.064431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064437] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064441] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.378 [2024-11-29 16:53:27.064453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.378 [2024-11-29 16:53:27.064457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff900) on tqpair=0xfa7a10 00:18:03.378 [2024-11-29 16:53:27.064480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.378 [2024-11-29 16:53:27.064487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.378 [2024-11-29 16:53:27.064491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff780) on tqpair=0xfa7a10 00:18:03.378 [2024-11-29 16:53:27.064507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfa7a10) 00:18:03.378 [2024-11-29 16:53:27.064519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.378 [2024-11-29 16:53:27.064544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff780, cid 4, qid 0 00:18:03.378 [2024-11-29 16:53:27.064611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:03.378 [2024-11-29 16:53:27.064618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:03.378 [2024-11-29 16:53:27.064622] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064626] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfa7a10): datao=0, datal=3072, cccid=4 00:18:03.378 [2024-11-29 16:53:27.064631] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfff780) on tqpair(0xfa7a10): expected_datao=0, payload_size=3072 00:18:03.378 [2024-11-29 16:53:27.064635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064642] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064646] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.378 [2024-11-29 16:53:27.064661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.378 [2024-11-29 16:53:27.064664] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:18:03.378 [2024-11-29 16:53:27.064668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff780) on tqpair=0xfa7a10 00:18:03.378 [2024-11-29 16:53:27.064678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfa7a10) 00:18:03.378 [2024-11-29 16:53:27.064690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.378 [2024-11-29 16:53:27.064712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff780, cid 4, qid 0 00:18:03.378 [2024-11-29 16:53:27.064782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:03.378 [2024-11-29 16:53:27.064789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:03.378 [2024-11-29 16:53:27.064793] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064797] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfa7a10): datao=0, datal=8, cccid=4 00:18:03.378 [2024-11-29 16:53:27.064801] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfff780) on tqpair(0xfa7a10): expected_datao=0, payload_size=8 00:18:03.378 [2024-11-29 16:53:27.064806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064812] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064816] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.378 [2024-11-29 16:53:27.064837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.378 [2024-11-29 16:53:27.064841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.378 [2024-11-29 16:53:27.064845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff780) on tqpair=0xfa7a10 00:18:03.378 ===================================================== 00:18:03.378 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:03.378 ===================================================== 00:18:03.378 Controller Capabilities/Features 00:18:03.378 ================================ 00:18:03.378 Vendor ID: 0000 00:18:03.378 Subsystem Vendor ID: 0000 00:18:03.378 Serial Number: .................... 00:18:03.378 Model Number: ........................................ 
00:18:03.378 Firmware Version: 25.01 00:18:03.378 Recommended Arb Burst: 0 00:18:03.378 IEEE OUI Identifier: 00 00 00 00:18:03.378 Multi-path I/O 00:18:03.378 May have multiple subsystem ports: No 00:18:03.378 May have multiple controllers: No 00:18:03.378 Associated with SR-IOV VF: No 00:18:03.378 Max Data Transfer Size: 131072 00:18:03.378 Max Number of Namespaces: 0 00:18:03.378 Max Number of I/O Queues: 1024 00:18:03.378 NVMe Specification Version (VS): 1.3 00:18:03.378 NVMe Specification Version (Identify): 1.3 00:18:03.378 Maximum Queue Entries: 128 00:18:03.378 Contiguous Queues Required: Yes 00:18:03.378 Arbitration Mechanisms Supported 00:18:03.378 Weighted Round Robin: Not Supported 00:18:03.378 Vendor Specific: Not Supported 00:18:03.378 Reset Timeout: 15000 ms 00:18:03.378 Doorbell Stride: 4 bytes 00:18:03.378 NVM Subsystem Reset: Not Supported 00:18:03.378 Command Sets Supported 00:18:03.378 NVM Command Set: Supported 00:18:03.378 Boot Partition: Not Supported 00:18:03.378 Memory Page Size Minimum: 4096 bytes 00:18:03.378 Memory Page Size Maximum: 4096 bytes 00:18:03.378 Persistent Memory Region: Not Supported 00:18:03.378 Optional Asynchronous Events Supported 00:18:03.378 Namespace Attribute Notices: Not Supported 00:18:03.378 Firmware Activation Notices: Not Supported 00:18:03.378 ANA Change Notices: Not Supported 00:18:03.378 PLE Aggregate Log Change Notices: Not Supported 00:18:03.378 LBA Status Info Alert Notices: Not Supported 00:18:03.378 EGE Aggregate Log Change Notices: Not Supported 00:18:03.378 Normal NVM Subsystem Shutdown event: Not Supported 00:18:03.378 Zone Descriptor Change Notices: Not Supported 00:18:03.378 Discovery Log Change Notices: Supported 00:18:03.378 Controller Attributes 00:18:03.378 128-bit Host Identifier: Not Supported 00:18:03.378 Non-Operational Permissive Mode: Not Supported 00:18:03.378 NVM Sets: Not Supported 00:18:03.378 Read Recovery Levels: Not Supported 00:18:03.378 Endurance Groups: Not Supported 00:18:03.378 Predictable Latency Mode: Not Supported 00:18:03.378 Traffic Based Keep ALive: Not Supported 00:18:03.378 Namespace Granularity: Not Supported 00:18:03.378 SQ Associations: Not Supported 00:18:03.378 UUID List: Not Supported 00:18:03.378 Multi-Domain Subsystem: Not Supported 00:18:03.378 Fixed Capacity Management: Not Supported 00:18:03.378 Variable Capacity Management: Not Supported 00:18:03.378 Delete Endurance Group: Not Supported 00:18:03.378 Delete NVM Set: Not Supported 00:18:03.378 Extended LBA Formats Supported: Not Supported 00:18:03.378 Flexible Data Placement Supported: Not Supported 00:18:03.378 00:18:03.378 Controller Memory Buffer Support 00:18:03.378 ================================ 00:18:03.378 Supported: No 00:18:03.378 00:18:03.378 Persistent Memory Region Support 00:18:03.378 ================================ 00:18:03.378 Supported: No 00:18:03.378 00:18:03.378 Admin Command Set Attributes 00:18:03.378 ============================ 00:18:03.378 Security Send/Receive: Not Supported 00:18:03.378 Format NVM: Not Supported 00:18:03.378 Firmware Activate/Download: Not Supported 00:18:03.378 Namespace Management: Not Supported 00:18:03.378 Device Self-Test: Not Supported 00:18:03.378 Directives: Not Supported 00:18:03.378 NVMe-MI: Not Supported 00:18:03.378 Virtualization Management: Not Supported 00:18:03.378 Doorbell Buffer Config: Not Supported 00:18:03.378 Get LBA Status Capability: Not Supported 00:18:03.378 Command & Feature Lockdown Capability: Not Supported 00:18:03.378 Abort Command Limit: 1 00:18:03.378 Async 
Event Request Limit: 4 00:18:03.378 Number of Firmware Slots: N/A 00:18:03.378 Firmware Slot 1 Read-Only: N/A 00:18:03.378 Firmware Activation Without Reset: N/A 00:18:03.378 Multiple Update Detection Support: N/A 00:18:03.378 Firmware Update Granularity: No Information Provided 00:18:03.378 Per-Namespace SMART Log: No 00:18:03.378 Asymmetric Namespace Access Log Page: Not Supported 00:18:03.378 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:03.378 Command Effects Log Page: Not Supported 00:18:03.378 Get Log Page Extended Data: Supported 00:18:03.378 Telemetry Log Pages: Not Supported 00:18:03.378 Persistent Event Log Pages: Not Supported 00:18:03.378 Supported Log Pages Log Page: May Support 00:18:03.378 Commands Supported & Effects Log Page: Not Supported 00:18:03.378 Feature Identifiers & Effects Log Page:May Support 00:18:03.378 NVMe-MI Commands & Effects Log Page: May Support 00:18:03.378 Data Area 4 for Telemetry Log: Not Supported 00:18:03.378 Error Log Page Entries Supported: 128 00:18:03.378 Keep Alive: Not Supported 00:18:03.378 00:18:03.378 NVM Command Set Attributes 00:18:03.378 ========================== 00:18:03.378 Submission Queue Entry Size 00:18:03.378 Max: 1 00:18:03.378 Min: 1 00:18:03.378 Completion Queue Entry Size 00:18:03.378 Max: 1 00:18:03.378 Min: 1 00:18:03.378 Number of Namespaces: 0 00:18:03.378 Compare Command: Not Supported 00:18:03.378 Write Uncorrectable Command: Not Supported 00:18:03.378 Dataset Management Command: Not Supported 00:18:03.378 Write Zeroes Command: Not Supported 00:18:03.378 Set Features Save Field: Not Supported 00:18:03.378 Reservations: Not Supported 00:18:03.378 Timestamp: Not Supported 00:18:03.378 Copy: Not Supported 00:18:03.378 Volatile Write Cache: Not Present 00:18:03.378 Atomic Write Unit (Normal): 1 00:18:03.378 Atomic Write Unit (PFail): 1 00:18:03.378 Atomic Compare & Write Unit: 1 00:18:03.378 Fused Compare & Write: Supported 00:18:03.378 Scatter-Gather List 00:18:03.378 SGL Command Set: Supported 00:18:03.378 SGL Keyed: Supported 00:18:03.378 SGL Bit Bucket Descriptor: Not Supported 00:18:03.378 SGL Metadata Pointer: Not Supported 00:18:03.378 Oversized SGL: Not Supported 00:18:03.378 SGL Metadata Address: Not Supported 00:18:03.378 SGL Offset: Supported 00:18:03.378 Transport SGL Data Block: Not Supported 00:18:03.378 Replay Protected Memory Block: Not Supported 00:18:03.378 00:18:03.378 Firmware Slot Information 00:18:03.378 ========================= 00:18:03.378 Active slot: 0 00:18:03.378 00:18:03.378 00:18:03.378 Error Log 00:18:03.378 ========= 00:18:03.378 00:18:03.378 Active Namespaces 00:18:03.378 ================= 00:18:03.378 Discovery Log Page 00:18:03.378 ================== 00:18:03.378 Generation Counter: 2 00:18:03.378 Number of Records: 2 00:18:03.378 Record Format: 0 00:18:03.378 00:18:03.378 Discovery Log Entry 0 00:18:03.378 ---------------------- 00:18:03.378 Transport Type: 3 (TCP) 00:18:03.378 Address Family: 1 (IPv4) 00:18:03.378 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:03.378 Entry Flags: 00:18:03.378 Duplicate Returned Information: 1 00:18:03.378 Explicit Persistent Connection Support for Discovery: 1 00:18:03.378 Transport Requirements: 00:18:03.378 Secure Channel: Not Required 00:18:03.378 Port ID: 0 (0x0000) 00:18:03.378 Controller ID: 65535 (0xffff) 00:18:03.378 Admin Max SQ Size: 128 00:18:03.378 Transport Service Identifier: 4420 00:18:03.378 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:03.378 Transport Address: 10.0.0.3 00:18:03.378 
Discovery Log Entry 1 00:18:03.378 ---------------------- 00:18:03.378 Transport Type: 3 (TCP) 00:18:03.378 Address Family: 1 (IPv4) 00:18:03.378 Subsystem Type: 2 (NVM Subsystem) 00:18:03.378 Entry Flags: 00:18:03.378 Duplicate Returned Information: 0 00:18:03.378 Explicit Persistent Connection Support for Discovery: 0 00:18:03.378 Transport Requirements: 00:18:03.378 Secure Channel: Not Required 00:18:03.378 Port ID: 0 (0x0000) 00:18:03.378 Controller ID: 65535 (0xffff) 00:18:03.378 Admin Max SQ Size: 128 00:18:03.378 Transport Service Identifier: 4420 00:18:03.378 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:18:03.378 Transport Address: 10.0.0.3 [2024-11-29 16:53:27.064932] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:18:03.378 [2024-11-29 16:53:27.064945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff180) on tqpair=0xfa7a10 00:18:03.378 [2024-11-29 16:53:27.064951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.378 [2024-11-29 16:53:27.064957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff300) on tqpair=0xfa7a10 00:18:03.378 [2024-11-29 16:53:27.064962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.379 [2024-11-29 16:53:27.064967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff480) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.064972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.379 [2024-11-29 16:53:27.064977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.064981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.379 [2024-11-29 16:53:27.064993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.064998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.379 [2024-11-29 16:53:27.065010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.379 [2024-11-29 16:53:27.065031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.379 [2024-11-29 16:53:27.065076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.379 [2024-11-29 16:53:27.065084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.379 [2024-11-29 16:53:27.065088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.065099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.379 [2024-11-29 16:53:27.065115] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.379 [2024-11-29 16:53:27.065135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.379 [2024-11-29 16:53:27.065202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.379 [2024-11-29 16:53:27.065209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.379 [2024-11-29 16:53:27.065212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.065221] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:18:03.379 [2024-11-29 16:53:27.065226] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:18:03.379 [2024-11-29 16:53:27.065236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.379 [2024-11-29 16:53:27.065252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.379 [2024-11-29 16:53:27.065268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.379 [2024-11-29 16:53:27.065314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.379 [2024-11-29 16:53:27.065321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.379 [2024-11-29 16:53:27.065324] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.065355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.379 [2024-11-29 16:53:27.065371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.379 [2024-11-29 16:53:27.065390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.379 [2024-11-29 16:53:27.065440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.379 [2024-11-29 16:53:27.065446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.379 [2024-11-29 16:53:27.065450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.065464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065473] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.379 [2024-11-29 16:53:27.065480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.379 [2024-11-29 16:53:27.065496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.379 [2024-11-29 16:53:27.065542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.379 [2024-11-29 16:53:27.065549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.379 [2024-11-29 16:53:27.065552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.065567] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.379 [2024-11-29 16:53:27.065582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.379 [2024-11-29 16:53:27.065598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.379 [2024-11-29 16:53:27.065643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.379 [2024-11-29 16:53:27.065650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.379 [2024-11-29 16:53:27.065654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.065668] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.379 [2024-11-29 16:53:27.065683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.379 [2024-11-29 16:53:27.065699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.379 [2024-11-29 16:53:27.065744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.379 [2024-11-29 16:53:27.065751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.379 [2024-11-29 16:53:27.065755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.065769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.379 [2024-11-29 16:53:27.065784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.379 [2024-11-29 16:53:27.065800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.379 [2024-11-29 16:53:27.065846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.379 [2024-11-29 16:53:27.065852] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.379 [2024-11-29 16:53:27.065856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065861] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.065871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.379 [2024-11-29 16:53:27.065887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.379 [2024-11-29 16:53:27.065903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.379 [2024-11-29 16:53:27.065949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.379 [2024-11-29 16:53:27.065956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.379 [2024-11-29 16:53:27.065959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.065973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.065982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.379 [2024-11-29 16:53:27.065989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.379 [2024-11-29 16:53:27.066005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.379 [2024-11-29 16:53:27.066053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.379 [2024-11-29 16:53:27.066060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.379 [2024-11-29 16:53:27.066063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.066067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.066078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.066082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.066086] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.379 [2024-11-29 16:53:27.066093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.379 [2024-11-29 16:53:27.066109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.379 [2024-11-29 16:53:27.066157] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.379 [2024-11-29 16:53:27.066164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.379 [2024-11-29 16:53:27.066167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.066171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.066181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.066186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.066190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.379 [2024-11-29 16:53:27.066197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.379 [2024-11-29 16:53:27.066212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.379 [2024-11-29 16:53:27.066261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.379 [2024-11-29 16:53:27.066267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.379 [2024-11-29 16:53:27.066271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.066275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.066285] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.066290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.066293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.379 [2024-11-29 16:53:27.066301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.379 [2024-11-29 16:53:27.066316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.379 [2024-11-29 16:53:27.069377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.379 [2024-11-29 16:53:27.069397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.379 [2024-11-29 16:53:27.069419] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.069423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.069437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.069443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.069447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfa7a10) 00:18:03.379 [2024-11-29 16:53:27.069455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.379 [2024-11-29 16:53:27.069480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfff600, cid 3, qid 0 00:18:03.379 [2024-11-29 16:53:27.069532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.379 [2024-11-29 16:53:27.069539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.379 [2024-11-29 16:53:27.069542] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.379 [2024-11-29 16:53:27.069546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfff600) on tqpair=0xfa7a10 00:18:03.379 [2024-11-29 16:53:27.069555] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:18:03.379 00:18:03.379 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:18:03.379 [2024-11-29 16:53:27.113083] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:03.379 [2024-11-29 16:53:27.113138] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90181 ] 00:18:03.642 [2024-11-29 16:53:27.234661] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:03.642 [2024-11-29 16:53:27.273466] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:18:03.642 [2024-11-29 16:53:27.273527] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:03.642 [2024-11-29 16:53:27.273535] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:03.642 [2024-11-29 16:53:27.273548] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:03.642 [2024-11-29 16:53:27.273557] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:03.642 [2024-11-29 16:53:27.273808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:18:03.642 [2024-11-29 16:53:27.273853] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x171da10 0 00:18:03.642 [2024-11-29 16:53:27.284353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:03.642 [2024-11-29 16:53:27.284376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:03.642 [2024-11-29 16:53:27.284383] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:03.642 [2024-11-29 16:53:27.284387] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:03.642 [2024-11-29 16:53:27.284420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.284427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.284432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x171da10) 00:18:03.642 [2024-11-29 16:53:27.284445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:03.642 [2024-11-29 16:53:27.284478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775180, cid 0, qid 0 00:18:03.642 [2024-11-29 16:53:27.293421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.642 [2024-11-29 16:53:27.293442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
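The first run above ends with the discovery controller being torn down ("shutdown complete in 4 milliseconds"), after which host/identify.sh invokes spdk_nvme_identify a second time, now against the NVM subsystem advertised in Discovery Log Entry 1 (nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420). The DEBUG lines surrounding this point trace SPDK's standard attach sequence: connect adminq, read VS, read CAP, write CC.EN = 1, wait for CSTS.RDY = 1. Below is a minimal sketch of the same attach through SPDK's public C API; it is illustrative only (not part of the autotest), the transport string is copied from the -r argument shown above, and error handling is trimmed.

/* attach_sketch.c - illustrative only, not part of the autotest suite */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int main(void)
{
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";
        if (spdk_env_init(&env_opts) < 0) {
                return 1;
        }

        /* Same transport description the log shows being passed via -r. */
        spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1");

        /* Drives the connect adminq / read vs / read cap / enable transitions
         * recorded by the _nvme_ctrlr_set_state DEBUG lines in this trace. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
                fprintf(stderr, "connect to %s failed\n", trid.subnqn);
                return 1;
        }

        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model Number: %.40s\n", (const char *)cdata->mn);

        spdk_nvme_detach(ctrlr);
        return 0;
}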
00:18:03.642 [2024-11-29 16:53:27.293447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.293452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775180) on tqpair=0x171da10 00:18:03.642 [2024-11-29 16:53:27.293463] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:03.642 [2024-11-29 16:53:27.293472] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:18:03.642 [2024-11-29 16:53:27.293479] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:18:03.642 [2024-11-29 16:53:27.293497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.293502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.293506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x171da10) 00:18:03.642 [2024-11-29 16:53:27.293517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.642 [2024-11-29 16:53:27.293546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775180, cid 0, qid 0 00:18:03.642 [2024-11-29 16:53:27.293606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.642 [2024-11-29 16:53:27.293614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.642 [2024-11-29 16:53:27.293618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.293622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775180) on tqpair=0x171da10 00:18:03.642 [2024-11-29 16:53:27.293628] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:18:03.642 [2024-11-29 16:53:27.293636] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:18:03.642 [2024-11-29 16:53:27.293645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.293649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.293653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x171da10) 00:18:03.642 [2024-11-29 16:53:27.293662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.642 [2024-11-29 16:53:27.293683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775180, cid 0, qid 0 00:18:03.642 [2024-11-29 16:53:27.293760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.642 [2024-11-29 16:53:27.293767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.642 [2024-11-29 16:53:27.293771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.293775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775180) on tqpair=0x171da10 00:18:03.642 [2024-11-29 16:53:27.293780] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:18:03.642 [2024-11-29 16:53:27.293789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en wait for cc (timeout 15000 ms) 00:18:03.642 [2024-11-29 16:53:27.293797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.293801] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.293805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x171da10) 00:18:03.642 [2024-11-29 16:53:27.293812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.642 [2024-11-29 16:53:27.293830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775180, cid 0, qid 0 00:18:03.642 [2024-11-29 16:53:27.293875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.642 [2024-11-29 16:53:27.293882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.642 [2024-11-29 16:53:27.293885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.293889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775180) on tqpair=0x171da10 00:18:03.642 [2024-11-29 16:53:27.293895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:03.642 [2024-11-29 16:53:27.293906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.293910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.293914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x171da10) 00:18:03.642 [2024-11-29 16:53:27.293922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.642 [2024-11-29 16:53:27.293940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775180, cid 0, qid 0 00:18:03.642 [2024-11-29 16:53:27.293984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.642 [2024-11-29 16:53:27.293990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.642 [2024-11-29 16:53:27.293994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.293998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775180) on tqpair=0x171da10 00:18:03.642 [2024-11-29 16:53:27.294003] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:18:03.642 [2024-11-29 16:53:27.294009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:18:03.642 [2024-11-29 16:53:27.294017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:03.642 [2024-11-29 16:53:27.294123] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:18:03.642 [2024-11-29 16:53:27.294134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:03.642 [2024-11-29 16:53:27.294143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.294148] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.294152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x171da10) 00:18:03.642 [2024-11-29 16:53:27.294160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.642 [2024-11-29 16:53:27.294179] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775180, cid 0, qid 0 00:18:03.642 [2024-11-29 16:53:27.294222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.642 [2024-11-29 16:53:27.294230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.642 [2024-11-29 16:53:27.294234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.294238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775180) on tqpair=0x171da10 00:18:03.642 [2024-11-29 16:53:27.294243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:03.642 [2024-11-29 16:53:27.294254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.294258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.294262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x171da10) 00:18:03.642 [2024-11-29 16:53:27.294270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.642 [2024-11-29 16:53:27.294288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775180, cid 0, qid 0 00:18:03.642 [2024-11-29 16:53:27.294377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.642 [2024-11-29 16:53:27.294389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.642 [2024-11-29 16:53:27.294394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.294398] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775180) on tqpair=0x171da10 00:18:03.642 [2024-11-29 16:53:27.294404] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:03.642 [2024-11-29 16:53:27.294409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:18:03.642 [2024-11-29 16:53:27.294419] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:18:03.642 [2024-11-29 16:53:27.294429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:18:03.642 [2024-11-29 16:53:27.294440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.294445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x171da10) 00:18:03.642 [2024-11-29 16:53:27.294453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.642 [2024-11-29 16:53:27.294477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1775180, cid 0, qid 0 00:18:03.642 [2024-11-29 16:53:27.294569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:03.642 [2024-11-29 16:53:27.294582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:03.642 [2024-11-29 16:53:27.294587] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.294591] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x171da10): datao=0, datal=4096, cccid=0 00:18:03.642 [2024-11-29 16:53:27.294596] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1775180) on tqpair(0x171da10): expected_datao=0, payload_size=4096 00:18:03.642 [2024-11-29 16:53:27.294602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.294610] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.294615] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.294624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.642 [2024-11-29 16:53:27.294631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.642 [2024-11-29 16:53:27.294634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.642 [2024-11-29 16:53:27.294639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775180) on tqpair=0x171da10 00:18:03.642 [2024-11-29 16:53:27.294647] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:18:03.642 [2024-11-29 16:53:27.294653] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:18:03.643 [2024-11-29 16:53:27.294658] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:18:03.643 [2024-11-29 16:53:27.294669] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:18:03.643 [2024-11-29 16:53:27.294675] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:18:03.643 [2024-11-29 16:53:27.294681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.294691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.294699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.294703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.294708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.294716] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:03.643 [2024-11-29 16:53:27.294754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775180, cid 0, qid 0 00:18:03.643 [2024-11-29 16:53:27.294821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.643 [2024-11-29 16:53:27.294828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.643 [2024-11-29 16:53:27.294832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.294836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775180) on tqpair=0x171da10 00:18:03.643 [2024-11-29 16:53:27.294843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.294848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.294851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.294858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.643 [2024-11-29 16:53:27.294865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.294869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.294873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.294878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.643 [2024-11-29 16:53:27.294885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.294889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.294893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.294898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.643 [2024-11-29 16:53:27.294905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.294908] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.294912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.294918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.643 [2024-11-29 16:53:27.294924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.294932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.294940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.294944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.294951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.643 [2024-11-29 16:53:27.294976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775180, cid 0, qid 0 00:18:03.643 [2024-11-29 16:53:27.294983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775300, cid 1, qid 0 00:18:03.643 [2024-11-29 16:53:27.294988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775480, cid 2, qid 0 00:18:03.643 [2024-11-29 16:53:27.294993] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.643 [2024-11-29 16:53:27.294998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775780, cid 4, qid 0 00:18:03.643 [2024-11-29 16:53:27.295082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.643 [2024-11-29 16:53:27.295094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.643 [2024-11-29 16:53:27.295098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775780) on tqpair=0x171da10 00:18:03.643 [2024-11-29 16:53:27.295108] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:18:03.643 [2024-11-29 16:53:27.295114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.295123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.295130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.295137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.295153] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:03.643 [2024-11-29 16:53:27.295172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775780, cid 4, qid 0 00:18:03.643 [2024-11-29 16:53:27.295219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.643 [2024-11-29 16:53:27.295231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.643 [2024-11-29 16:53:27.295235] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295239] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775780) on tqpair=0x171da10 00:18:03.643 [2024-11-29 16:53:27.295303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.295319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.295373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.295388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.643 [2024-11-29 16:53:27.295411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775780, cid 4, qid 0 00:18:03.643 [2024-11-29 16:53:27.295477] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:03.643 [2024-11-29 16:53:27.295485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:03.643 [2024-11-29 16:53:27.295489] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295493] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x171da10): datao=0, datal=4096, cccid=4 00:18:03.643 [2024-11-29 16:53:27.295498] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1775780) on tqpair(0x171da10): expected_datao=0, payload_size=4096 00:18:03.643 [2024-11-29 16:53:27.295504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295511] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295516] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.643 [2024-11-29 16:53:27.295531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.643 [2024-11-29 16:53:27.295535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775780) on tqpair=0x171da10 00:18:03.643 [2024-11-29 16:53:27.295550] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:18:03.643 [2024-11-29 16:53:27.295563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.295575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.295583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.295596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.643 [2024-11-29 16:53:27.295617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775780, cid 4, qid 0 00:18:03.643 [2024-11-29 16:53:27.295710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:03.643 [2024-11-29 16:53:27.295719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:03.643 [2024-11-29 16:53:27.295723] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295727] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x171da10): datao=0, datal=4096, cccid=4 00:18:03.643 [2024-11-29 16:53:27.295732] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1775780) on tqpair(0x171da10): expected_datao=0, payload_size=4096 00:18:03.643 [2024-11-29 16:53:27.295737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295745] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295749] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.643 
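Once CSTS.RDY reads 1, the init state machine traced above continues through identify controller, AER configuration, keep alive, number of queues, and the identify-active-ns step where "Namespace 1 was added" is reported through spdk_nvme_ctrlr_get_ns, followed by identify ns and namespace ID descriptors. A short sketch of how an application would walk those active namespaces once the attach from the previous sketch completes; the ctrlr handle is carried over from that sketch and the function remains illustrative.

/* Mirrors the identify active ns / identify ns steps recorded in the trace. */
#include "spdk/nvme.h"
#include <inttypes.h>
#include <stdio.h>

static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
        uint32_t nsid;

        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
                const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

                printf("Namespace %" PRIu32 ": %" PRIu64 " blocks, %" PRIu32 "-byte sectors\n",
                       nsid, nsdata->nsze, spdk_nvme_ns_get_sector_size(ns));
        }
}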
[2024-11-29 16:53:27.295764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.643 [2024-11-29 16:53:27.295768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775780) on tqpair=0x171da10 00:18:03.643 [2024-11-29 16:53:27.295789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.295802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.295811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.295824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.643 [2024-11-29 16:53:27.295845] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775780, cid 4, qid 0 00:18:03.643 [2024-11-29 16:53:27.295911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:03.643 [2024-11-29 16:53:27.295918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:03.643 [2024-11-29 16:53:27.295922] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295926] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x171da10): datao=0, datal=4096, cccid=4 00:18:03.643 [2024-11-29 16:53:27.295932] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1775780) on tqpair(0x171da10): expected_datao=0, payload_size=4096 00:18:03.643 [2024-11-29 16:53:27.295936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295944] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295948] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.643 [2024-11-29 16:53:27.295978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.643 [2024-11-29 16:53:27.295982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.295986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775780) on tqpair=0x171da10 00:18:03.643 [2024-11-29 16:53:27.295995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.296019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.296029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.296036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.296043] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.296049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.296054] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:18:03.643 [2024-11-29 16:53:27.296059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:18:03.643 [2024-11-29 16:53:27.296064] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:18:03.643 [2024-11-29 16:53:27.296080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.296085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.296092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.643 [2024-11-29 16:53:27.296100] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.296104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.296108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.296114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.643 [2024-11-29 16:53:27.296139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775780, cid 4, qid 0 00:18:03.643 [2024-11-29 16:53:27.296146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775900, cid 5, qid 0 00:18:03.643 [2024-11-29 16:53:27.296248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.643 [2024-11-29 16:53:27.296254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.643 [2024-11-29 16:53:27.296258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.296262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775780) on tqpair=0x171da10 00:18:03.643 [2024-11-29 16:53:27.296269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.643 [2024-11-29 16:53:27.296275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.643 [2024-11-29 16:53:27.296279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.296283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775900) on tqpair=0x171da10 00:18:03.643 [2024-11-29 16:53:27.296293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.296298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.296305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.643 [2024-11-29 16:53:27.296323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775900, cid 5, qid 0 00:18:03.643 
[2024-11-29 16:53:27.296470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.643 [2024-11-29 16:53:27.296480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.643 [2024-11-29 16:53:27.296484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.296488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775900) on tqpair=0x171da10 00:18:03.643 [2024-11-29 16:53:27.296499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.296504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.296512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.643 [2024-11-29 16:53:27.296534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775900, cid 5, qid 0 00:18:03.643 [2024-11-29 16:53:27.296632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.643 [2024-11-29 16:53:27.296640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.643 [2024-11-29 16:53:27.296644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.296648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775900) on tqpair=0x171da10 00:18:03.643 [2024-11-29 16:53:27.296659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.296664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.296671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.643 [2024-11-29 16:53:27.296690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775900, cid 5, qid 0 00:18:03.643 [2024-11-29 16:53:27.296807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.643 [2024-11-29 16:53:27.296813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.643 [2024-11-29 16:53:27.296817] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.296821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775900) on tqpair=0x171da10 00:18:03.643 [2024-11-29 16:53:27.296839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.296844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.296852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.643 [2024-11-29 16:53:27.296860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.643 [2024-11-29 16:53:27.296864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x171da10) 00:18:03.643 [2024-11-29 16:53:27.296870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.643 [2024-11-29 16:53:27.296878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.644 
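The GET FEATURES reads above and the GET LOG PAGE (02) commands that start here (the remaining log-page reads continue just below) all follow the same queue-then-poll pattern: nvme_tcp_qpair_capsule_cmd_send queues the admin capsule and nvme_tcp_req_complete fires once the completion is polled off the admin queue. A rough sketch of that pattern for a single SMART/health page read (log identifier 0x02, matching the cdw10:007f0002 entry above); the blocking busy-wait is a simplified stand-in for the tool's own completion handling, and the function names here are illustrative.

/* Queue one admin command, then poll admin completions until it finishes. */
#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>

static bool g_log_page_done;

static void log_page_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
        if (spdk_nvme_cpl_is_error(cpl)) {
                fprintf(stderr, "GET LOG PAGE failed\n");
        }
        g_log_page_done = true;
}

static int read_health_log(struct spdk_nvme_ctrlr *ctrlr,
                           struct spdk_nvme_health_information_page *page)
{
        g_log_page_done = false;
        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                             SPDK_NVME_GLOBAL_NS_TAG, page, sizeof(*page),
                                             0, log_page_cb, NULL) != 0) {
                return -1;
        }
        while (!g_log_page_done) {
                spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
}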
[2024-11-29 16:53:27.296882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x171da10) 00:18:03.644 [2024-11-29 16:53:27.296888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.644 [2024-11-29 16:53:27.296896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.296900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x171da10) 00:18:03.644 [2024-11-29 16:53:27.296906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.644 [2024-11-29 16:53:27.296926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775900, cid 5, qid 0 00:18:03.644 [2024-11-29 16:53:27.296934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775780, cid 4, qid 0 00:18:03.644 [2024-11-29 16:53:27.296939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775a80, cid 6, qid 0 00:18:03.644 [2024-11-29 16:53:27.296944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775c00, cid 7, qid 0 00:18:03.644 [2024-11-29 16:53:27.297166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:03.644 [2024-11-29 16:53:27.297173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:03.644 [2024-11-29 16:53:27.297176] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297180] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x171da10): datao=0, datal=8192, cccid=5 00:18:03.644 [2024-11-29 16:53:27.297186] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1775900) on tqpair(0x171da10): expected_datao=0, payload_size=8192 00:18:03.644 [2024-11-29 16:53:27.297190] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297207] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297212] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:03.644 [2024-11-29 16:53:27.297223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:03.644 [2024-11-29 16:53:27.297227] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297231] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x171da10): datao=0, datal=512, cccid=4 00:18:03.644 [2024-11-29 16:53:27.297235] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1775780) on tqpair(0x171da10): expected_datao=0, payload_size=512 00:18:03.644 [2024-11-29 16:53:27.297240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297246] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297250] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:03.644 [2024-11-29 16:53:27.297262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:03.644 [2024-11-29 16:53:27.297265] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297269] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x171da10): datao=0, datal=512, cccid=6 00:18:03.644 [2024-11-29 16:53:27.297273] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1775a80) on tqpair(0x171da10): expected_datao=0, payload_size=512 00:18:03.644 [2024-11-29 16:53:27.297278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297284] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297288] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:03.644 [2024-11-29 16:53:27.297299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:03.644 [2024-11-29 16:53:27.297303] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297307] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x171da10): datao=0, datal=4096, cccid=7 00:18:03.644 [2024-11-29 16:53:27.297311] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1775c00) on tqpair(0x171da10): expected_datao=0, payload_size=4096 00:18:03.644 [2024-11-29 16:53:27.297316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297322] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297326] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.644 [2024-11-29 16:53:27.297374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.644 [2024-11-29 16:53:27.297378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.297382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775900) on tqpair=0x171da10 00:18:03.644 [2024-11-29 16:53:27.301424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.644 [2024-11-29 16:53:27.301438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.644 [2024-11-29 16:53:27.301442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.301446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775780) on tqpair=0x171da10 00:18:03.644 ===================================================== 00:18:03.644 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:03.644 ===================================================== 00:18:03.644 Controller Capabilities/Features 00:18:03.644 ================================ 00:18:03.644 Vendor ID: 8086 00:18:03.644 Subsystem Vendor ID: 8086 00:18:03.644 Serial Number: SPDK00000000000001 00:18:03.644 Model Number: SPDK bdev Controller 00:18:03.644 Firmware Version: 25.01 00:18:03.644 Recommended Arb Burst: 6 00:18:03.644 IEEE OUI Identifier: e4 d2 5c 00:18:03.644 Multi-path I/O 00:18:03.644 May have multiple subsystem ports: Yes 00:18:03.644 May have multiple controllers: Yes 00:18:03.644 Associated with SR-IOV VF: No 00:18:03.644 Max Data Transfer Size: 131072 00:18:03.644 Max Number of Namespaces: 32 00:18:03.644 Max Number of I/O Queues: 127 00:18:03.644 NVMe Specification Version (VS): 1.3 00:18:03.644 
NVMe Specification Version (Identify): 1.3 00:18:03.644 Maximum Queue Entries: 128 00:18:03.644 Contiguous Queues Required: Yes 00:18:03.644 Arbitration Mechanisms Supported 00:18:03.644 Weighted Round Robin: Not Supported 00:18:03.644 Vendor Specific: Not Supported 00:18:03.644 Reset Timeout: 15000 ms 00:18:03.644 Doorbell Stride: 4 bytes 00:18:03.644 NVM Subsystem Reset: Not Supported 00:18:03.644 Command Sets Supported 00:18:03.644 NVM Command Set: Supported 00:18:03.644 Boot Partition: Not Supported 00:18:03.644 Memory Page Size Minimum: 4096 bytes 00:18:03.644 Memory Page Size Maximum: 4096 bytes 00:18:03.644 Persistent Memory Region: Not Supported 00:18:03.644 Optional Asynchronous Events Supported 00:18:03.644 Namespace Attribute Notices: Supported 00:18:03.644 Firmware Activation Notices: Not Supported 00:18:03.644 ANA Change Notices: Not Supported 00:18:03.644 PLE Aggregate Log Change Notices: Not Supported 00:18:03.644 LBA Status Info Alert Notices: Not Supported 00:18:03.644 EGE Aggregate Log Change Notices: Not Supported 00:18:03.644 Normal NVM Subsystem Shutdown event: Not Supported 00:18:03.644 Zone Descriptor Change Notices: Not Supported 00:18:03.644 Discovery Log Change Notices: Not Supported 00:18:03.644 Controller Attributes 00:18:03.644 128-bit Host Identifier: Supported 00:18:03.644 Non-Operational Permissive Mode: Not Supported 00:18:03.644 NVM Sets: Not Supported 00:18:03.644 Read Recovery Levels: Not Supported 00:18:03.644 Endurance Groups: Not Supported 00:18:03.644 Predictable Latency Mode: Not Supported 00:18:03.644 Traffic Based Keep ALive: Not Supported 00:18:03.644 Namespace Granularity: Not Supported 00:18:03.644 SQ Associations: Not Supported 00:18:03.644 UUID List: Not Supported 00:18:03.644 Multi-Domain Subsystem: Not Supported 00:18:03.644 Fixed Capacity Management: Not Supported 00:18:03.644 Variable Capacity Management: Not Supported 00:18:03.644 Delete Endurance Group: Not Supported 00:18:03.644 Delete NVM Set: Not Supported 00:18:03.644 Extended LBA Formats Supported: Not Supported 00:18:03.644 Flexible Data Placement Supported: Not Supported 00:18:03.644 00:18:03.644 Controller Memory Buffer Support 00:18:03.644 ================================ 00:18:03.644 Supported: No 00:18:03.644 00:18:03.644 Persistent Memory Region Support 00:18:03.644 ================================ 00:18:03.644 Supported: No 00:18:03.644 00:18:03.644 Admin Command Set Attributes 00:18:03.644 ============================ 00:18:03.644 Security Send/Receive: Not Supported 00:18:03.644 Format NVM: Not Supported 00:18:03.644 Firmware Activate/Download: Not Supported 00:18:03.644 Namespace Management: Not Supported 00:18:03.644 Device Self-Test: Not Supported 00:18:03.644 Directives: Not Supported 00:18:03.644 NVMe-MI: Not Supported 00:18:03.644 Virtualization Management: Not Supported 00:18:03.644 Doorbell Buffer Config: Not Supported 00:18:03.644 Get LBA Status Capability: Not Supported 00:18:03.644 Command & Feature Lockdown Capability: Not Supported 00:18:03.644 Abort Command Limit: 4 00:18:03.644 Async Event Request Limit: 4 00:18:03.644 Number of Firmware Slots: N/A 00:18:03.644 Firmware Slot 1 Read-Only: N/A 00:18:03.644 Firmware Activation Without Reset: [2024-11-29 16:53:27.301459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.644 [2024-11-29 16:53:27.301465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.644 [2024-11-29 16:53:27.301469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.644 
[2024-11-29 16:53:27.301474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775a80) on tqpair=0x171da10 00:18:03.644 [2024-11-29 16:53:27.301481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.644 [2024-11-29 16:53:27.301488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.644 [2024-11-29 16:53:27.301492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.301496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775c00) on tqpair=0x171da10 00:18:03.644 N/A 00:18:03.644 Multiple Update Detection Support: N/A 00:18:03.644 Firmware Update Granularity: No Information Provided 00:18:03.644 Per-Namespace SMART Log: No 00:18:03.644 Asymmetric Namespace Access Log Page: Not Supported 00:18:03.644 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:18:03.644 Command Effects Log Page: Supported 00:18:03.644 Get Log Page Extended Data: Supported 00:18:03.644 Telemetry Log Pages: Not Supported 00:18:03.644 Persistent Event Log Pages: Not Supported 00:18:03.644 Supported Log Pages Log Page: May Support 00:18:03.644 Commands Supported & Effects Log Page: Not Supported 00:18:03.644 Feature Identifiers & Effects Log Page:May Support 00:18:03.644 NVMe-MI Commands & Effects Log Page: May Support 00:18:03.644 Data Area 4 for Telemetry Log: Not Supported 00:18:03.644 Error Log Page Entries Supported: 128 00:18:03.644 Keep Alive: Supported 00:18:03.644 Keep Alive Granularity: 10000 ms 00:18:03.644 00:18:03.644 NVM Command Set Attributes 00:18:03.644 ========================== 00:18:03.644 Submission Queue Entry Size 00:18:03.644 Max: 64 00:18:03.644 Min: 64 00:18:03.644 Completion Queue Entry Size 00:18:03.644 Max: 16 00:18:03.644 Min: 16 00:18:03.644 Number of Namespaces: 32 00:18:03.644 Compare Command: Supported 00:18:03.644 Write Uncorrectable Command: Not Supported 00:18:03.644 Dataset Management Command: Supported 00:18:03.644 Write Zeroes Command: Supported 00:18:03.644 Set Features Save Field: Not Supported 00:18:03.644 Reservations: Supported 00:18:03.644 Timestamp: Not Supported 00:18:03.644 Copy: Supported 00:18:03.644 Volatile Write Cache: Present 00:18:03.644 Atomic Write Unit (Normal): 1 00:18:03.644 Atomic Write Unit (PFail): 1 00:18:03.644 Atomic Compare & Write Unit: 1 00:18:03.644 Fused Compare & Write: Supported 00:18:03.644 Scatter-Gather List 00:18:03.644 SGL Command Set: Supported 00:18:03.644 SGL Keyed: Supported 00:18:03.644 SGL Bit Bucket Descriptor: Not Supported 00:18:03.644 SGL Metadata Pointer: Not Supported 00:18:03.644 Oversized SGL: Not Supported 00:18:03.644 SGL Metadata Address: Not Supported 00:18:03.644 SGL Offset: Supported 00:18:03.644 Transport SGL Data Block: Not Supported 00:18:03.644 Replay Protected Memory Block: Not Supported 00:18:03.644 00:18:03.644 Firmware Slot Information 00:18:03.644 ========================= 00:18:03.644 Active slot: 1 00:18:03.644 Slot 1 Firmware Revision: 25.01 00:18:03.644 00:18:03.644 00:18:03.644 Commands Supported and Effects 00:18:03.644 ============================== 00:18:03.644 Admin Commands 00:18:03.644 -------------- 00:18:03.644 Get Log Page (02h): Supported 00:18:03.644 Identify (06h): Supported 00:18:03.644 Abort (08h): Supported 00:18:03.644 Set Features (09h): Supported 00:18:03.644 Get Features (0Ah): Supported 00:18:03.644 Asynchronous Event Request (0Ch): Supported 00:18:03.644 Keep Alive (18h): Supported 00:18:03.644 I/O Commands 00:18:03.644 ------------ 00:18:03.644 Flush (00h): Supported 
LBA-Change 00:18:03.644 Write (01h): Supported LBA-Change 00:18:03.644 Read (02h): Supported 00:18:03.644 Compare (05h): Supported 00:18:03.644 Write Zeroes (08h): Supported LBA-Change 00:18:03.644 Dataset Management (09h): Supported LBA-Change 00:18:03.644 Copy (19h): Supported LBA-Change 00:18:03.644 00:18:03.644 Error Log 00:18:03.644 ========= 00:18:03.644 00:18:03.644 Arbitration 00:18:03.644 =========== 00:18:03.644 Arbitration Burst: 1 00:18:03.644 00:18:03.644 Power Management 00:18:03.644 ================ 00:18:03.644 Number of Power States: 1 00:18:03.644 Current Power State: Power State #0 00:18:03.644 Power State #0: 00:18:03.644 Max Power: 0.00 W 00:18:03.644 Non-Operational State: Operational 00:18:03.644 Entry Latency: Not Reported 00:18:03.644 Exit Latency: Not Reported 00:18:03.644 Relative Read Throughput: 0 00:18:03.644 Relative Read Latency: 0 00:18:03.644 Relative Write Throughput: 0 00:18:03.644 Relative Write Latency: 0 00:18:03.644 Idle Power: Not Reported 00:18:03.644 Active Power: Not Reported 00:18:03.644 Non-Operational Permissive Mode: Not Supported 00:18:03.644 00:18:03.644 Health Information 00:18:03.644 ================== 00:18:03.644 Critical Warnings: 00:18:03.644 Available Spare Space: OK 00:18:03.644 Temperature: OK 00:18:03.644 Device Reliability: OK 00:18:03.644 Read Only: No 00:18:03.644 Volatile Memory Backup: OK 00:18:03.644 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:03.644 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:03.644 Available Spare: 0% 00:18:03.644 Available Spare Threshold: 0% 00:18:03.644 Life Percentage Used:[2024-11-29 16:53:27.301608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.644 [2024-11-29 16:53:27.301616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x171da10) 00:18:03.644 [2024-11-29 16:53:27.301626] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.644 [2024-11-29 16:53:27.301656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775c00, cid 7, qid 0 00:18:03.644 [2024-11-29 16:53:27.301798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.644 [2024-11-29 16:53:27.301811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.644 [2024-11-29 16:53:27.301815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.301820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775c00) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.301858] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:18:03.645 [2024-11-29 16:53:27.301870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775180) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.301877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.645 [2024-11-29 16:53:27.301883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775300) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.301888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.645 [2024-11-29 16:53:27.301893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775480) on tqpair=0x171da10 00:18:03.645 [2024-11-29 
16:53:27.301897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.645 [2024-11-29 16:53:27.301902] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.301907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.645 [2024-11-29 16:53:27.301916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.301921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.301924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.301933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.301956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.302037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.302044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.302047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.302059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.302075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.302096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.302206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.302213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.302217] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.302226] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:18:03.645 [2024-11-29 16:53:27.302231] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:18:03.645 [2024-11-29 16:53:27.302241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.302257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 
16:53:27.302274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.302382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.302395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.302400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.302417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.302434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.302455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.302538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.302545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.302549] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.302564] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.302581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.302600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.302699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.302721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.302725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.302739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.302755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.302772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.302849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:18:03.645 [2024-11-29 16:53:27.302856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.302859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.302873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.302890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.302907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.302979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.302986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.302989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.302993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.303003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.303019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.303036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.303108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.303114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.303118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.303132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.303148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.303165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.303245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.303252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.303255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.303270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.303285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.303302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.303424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.303432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.303436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.303452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.303469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.303489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.303572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.303579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.303583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.303598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.303615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.303644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.303731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.303738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.303742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.303758] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303767] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.303775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.303793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.303876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.303883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.303887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.303902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.303911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.303919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.303937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.304059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.304071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.304075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.304079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.304089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.304094] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.304098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.304105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.304122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.304194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.304200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.304204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.304208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.304218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.304222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.304226] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.304233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.304250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.304350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.304359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.645 [2024-11-29 16:53:27.304362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.304367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.645 [2024-11-29 16:53:27.304378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.304383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.645 [2024-11-29 16:53:27.304387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.645 [2024-11-29 16:53:27.304395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.645 [2024-11-29 16:53:27.304415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.645 [2024-11-29 16:53:27.304500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.645 [2024-11-29 16:53:27.304507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.646 [2024-11-29 16:53:27.304511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.304515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.646 [2024-11-29 16:53:27.304526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.304531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.304535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.646 [2024-11-29 16:53:27.304542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.646 [2024-11-29 16:53:27.304560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.646 [2024-11-29 16:53:27.304640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.646 [2024-11-29 16:53:27.304647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.646 [2024-11-29 16:53:27.304652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.304656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.646 [2024-11-29 16:53:27.304667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.304672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.304676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.646 [2024-11-29 16:53:27.304698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.646 [2024-11-29 16:53:27.304730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.646 [2024-11-29 16:53:27.304814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.646 [2024-11-29 16:53:27.304820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.646 [2024-11-29 16:53:27.304824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.304828] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.646 [2024-11-29 16:53:27.304838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.304842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.304846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.646 [2024-11-29 16:53:27.304853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.646 [2024-11-29 16:53:27.304869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.646 [2024-11-29 16:53:27.304935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.646 [2024-11-29 16:53:27.304941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.646 [2024-11-29 16:53:27.304945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.304949] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.646 [2024-11-29 16:53:27.304959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.304963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.304967] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.646 [2024-11-29 16:53:27.304974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.646 [2024-11-29 16:53:27.304991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.646 [2024-11-29 16:53:27.305068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.646 [2024-11-29 16:53:27.305074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.646 [2024-11-29 16:53:27.305078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.305082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.646 [2024-11-29 16:53:27.305092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.305096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.305100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.646 [2024-11-29 16:53:27.305107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.646 [2024-11-29 16:53:27.305124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.646 [2024-11-29 
16:53:27.305197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.646 [2024-11-29 16:53:27.305204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.646 [2024-11-29 16:53:27.305208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.305212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.646 [2024-11-29 16:53:27.305223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.305227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.305231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.646 [2024-11-29 16:53:27.305238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.646 [2024-11-29 16:53:27.305255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.646 [2024-11-29 16:53:27.305323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.646 [2024-11-29 16:53:27.305329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.646 [2024-11-29 16:53:27.305349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.305353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.646 [2024-11-29 16:53:27.305379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.312431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.312440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x171da10) 00:18:03.646 [2024-11-29 16:53:27.312449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.646 [2024-11-29 16:53:27.312475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1775600, cid 3, qid 0 00:18:03.646 [2024-11-29 16:53:27.312592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:03.646 [2024-11-29 16:53:27.312600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:03.646 [2024-11-29 16:53:27.312603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:03.646 [2024-11-29 16:53:27.312607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1775600) on tqpair=0x171da10 00:18:03.646 [2024-11-29 16:53:27.312616] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 10 milliseconds 00:18:03.646 0% 00:18:03.646 Data Units Read: 0 00:18:03.646 Data Units Written: 0 00:18:03.646 Host Read Commands: 0 00:18:03.646 Host Write Commands: 0 00:18:03.646 Controller Busy Time: 0 minutes 00:18:03.646 Power Cycles: 0 00:18:03.646 Power On Hours: 0 hours 00:18:03.646 Unsafe Shutdowns: 0 00:18:03.646 Unrecoverable Media Errors: 0 00:18:03.646 Lifetime Error Log Entries: 0 00:18:03.646 Warning Temperature Time: 0 minutes 00:18:03.646 Critical Temperature Time: 0 minutes 00:18:03.646 00:18:03.646 Number of Queues 00:18:03.646 ================ 00:18:03.646 Number of I/O Submission Queues: 127 00:18:03.646 Number of I/O Completion Queues: 127 00:18:03.646 00:18:03.646 Active Namespaces 00:18:03.646 
================= 00:18:03.646 Namespace ID:1 00:18:03.646 Error Recovery Timeout: Unlimited 00:18:03.646 Command Set Identifier: NVM (00h) 00:18:03.646 Deallocate: Supported 00:18:03.646 Deallocated/Unwritten Error: Not Supported 00:18:03.646 Deallocated Read Value: Unknown 00:18:03.646 Deallocate in Write Zeroes: Not Supported 00:18:03.646 Deallocated Guard Field: 0xFFFF 00:18:03.646 Flush: Supported 00:18:03.646 Reservation: Supported 00:18:03.646 Namespace Sharing Capabilities: Multiple Controllers 00:18:03.646 Size (in LBAs): 131072 (0GiB) 00:18:03.646 Capacity (in LBAs): 131072 (0GiB) 00:18:03.646 Utilization (in LBAs): 131072 (0GiB) 00:18:03.646 NGUID: ABCDEF0123456789ABCDEF0123456789 00:18:03.646 EUI64: ABCDEF0123456789 00:18:03.646 UUID: b39c2ad4-77ab-4609-abfa-02e3eb807171 00:18:03.646 Thin Provisioning: Not Supported 00:18:03.646 Per-NS Atomic Units: Yes 00:18:03.646 Atomic Boundary Size (Normal): 0 00:18:03.646 Atomic Boundary Size (PFail): 0 00:18:03.646 Atomic Boundary Offset: 0 00:18:03.646 Maximum Single Source Range Length: 65535 00:18:03.646 Maximum Copy Length: 65535 00:18:03.646 Maximum Source Range Count: 1 00:18:03.646 NGUID/EUI64 Never Reused: No 00:18:03.646 Namespace Write Protected: No 00:18:03.646 Number of LBA Formats: 1 00:18:03.646 Current LBA Format: LBA Format #00 00:18:03.646 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:03.646 00:18:03.646 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:18:03.646 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:03.646 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.646 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:03.646 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.646 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:18:03.646 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:18:03.646 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:03.646 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:18:03.646 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:03.646 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:18:03.646 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:03.646 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:03.646 rmmod nvme_tcp 00:18:03.646 rmmod nvme_fabrics 00:18:03.905 rmmod nvme_keyring 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 90151 ']' 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 90151 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 90151 ']' 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 90151 00:18:03.905 16:53:27 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90151 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:03.905 killing process with pid 90151 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90151' 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 90151 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 90151 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:03.905 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:18:04.163 ************************************ 00:18:04.163 END TEST nvmf_identify 00:18:04.163 ************************************ 00:18:04.163 00:18:04.163 real 0m2.154s 00:18:04.163 user 0m4.471s 00:18:04.163 sys 0m0.663s 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.163 16:53:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.163 ************************************ 00:18:04.163 START TEST nvmf_perf 00:18:04.163 ************************************ 00:18:04.164 16:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:04.423 * Looking for test storage... 00:18:04.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:04.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.423 --rc genhtml_branch_coverage=1 00:18:04.423 --rc genhtml_function_coverage=1 00:18:04.423 --rc genhtml_legend=1 00:18:04.423 --rc geninfo_all_blocks=1 00:18:04.423 --rc geninfo_unexecuted_blocks=1 00:18:04.423 00:18:04.423 ' 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:04.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.423 --rc genhtml_branch_coverage=1 00:18:04.423 --rc genhtml_function_coverage=1 00:18:04.423 --rc genhtml_legend=1 00:18:04.423 --rc geninfo_all_blocks=1 00:18:04.423 --rc geninfo_unexecuted_blocks=1 00:18:04.423 00:18:04.423 ' 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:04.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.423 --rc genhtml_branch_coverage=1 00:18:04.423 --rc genhtml_function_coverage=1 00:18:04.423 --rc genhtml_legend=1 00:18:04.423 --rc geninfo_all_blocks=1 00:18:04.423 --rc geninfo_unexecuted_blocks=1 00:18:04.423 00:18:04.423 ' 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:04.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.423 --rc genhtml_branch_coverage=1 00:18:04.423 --rc genhtml_function_coverage=1 00:18:04.423 --rc genhtml_legend=1 00:18:04.423 --rc geninfo_all_blocks=1 00:18:04.423 --rc geninfo_unexecuted_blocks=1 00:18:04.423 00:18:04.423 ' 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.423 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:04.424 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:04.424 Cannot find device "nvmf_init_br" 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:04.424 Cannot find device "nvmf_init_br2" 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:04.424 Cannot find device "nvmf_tgt_br" 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:04.424 Cannot find device "nvmf_tgt_br2" 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:18:04.424 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:04.682 Cannot find device "nvmf_init_br" 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:04.682 Cannot find device "nvmf_init_br2" 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:04.682 Cannot find device "nvmf_tgt_br" 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:04.682 Cannot find device "nvmf_tgt_br2" 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:04.682 Cannot find device "nvmf_br" 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:04.682 Cannot find device "nvmf_init_if" 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:04.682 Cannot find device "nvmf_init_if2" 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:04.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:04.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:04.682 16:53:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:04.682 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:04.940 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:04.940 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:18:04.940 00:18:04.940 --- 10.0.0.3 ping statistics --- 00:18:04.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.940 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:04.940 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:18:04.940 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:18:04.940 00:18:04.940 --- 10.0.0.4 ping statistics --- 00:18:04.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.940 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:04.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:04.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:04.940 00:18:04.940 --- 10.0.0.1 ping statistics --- 00:18:04.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.940 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:04.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:04.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:18:04.940 00:18:04.940 --- 10.0.0.2 ping statistics --- 00:18:04.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.940 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:04.940 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:04.941 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:04.941 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=90399 00:18:04.941 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:04.941 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 90399 00:18:04.941 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 90399 ']' 00:18:04.941 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.941 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.941 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:04.941 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.941 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:04.941 [2024-11-29 16:53:28.623958] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:04.941 [2024-11-29 16:53:28.624065] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.199 [2024-11-29 16:53:28.745035] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:05.199 [2024-11-29 16:53:28.769807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:05.199 [2024-11-29 16:53:28.788327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.199 [2024-11-29 16:53:28.788396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.199 [2024-11-29 16:53:28.788406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.199 [2024-11-29 16:53:28.788413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.199 [2024-11-29 16:53:28.788419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.199 [2024-11-29 16:53:28.789143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.199 [2024-11-29 16:53:28.789291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.199 [2024-11-29 16:53:28.789857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:05.199 [2024-11-29 16:53:28.789906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.199 [2024-11-29 16:53:28.818530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:05.199 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.199 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:18:05.199 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:05.199 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:05.199 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:05.199 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.199 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:18:05.199 16:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:05.765 16:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:18:05.765 16:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:06.023 16:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:18:06.023 16:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:18:06.281 16:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:06.281 16:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:18:06.281 16:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:06.281 16:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:06.281 16:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:06.541 [2024-11-29 16:53:30.164930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.541 16:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:06.800 16:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:06.800 16:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:07.059 16:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:07.059 16:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:07.318 16:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:07.577 [2024-11-29 16:53:31.222191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:07.577 16:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:07.835 16:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:07.836 16:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:07.836 16:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:07.836 16:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:09.211 Initializing NVMe Controllers 00:18:09.211 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:09.211 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:09.211 Initialization complete. Launching workers. 
00:18:09.211 ======================================================== 00:18:09.211 Latency(us) 00:18:09.211 Device Information : IOPS MiB/s Average min max 00:18:09.211 PCIE (0000:00:10.0) NSID 1 from core 0: 22045.09 86.11 1451.48 316.83 7904.69 00:18:09.211 ======================================================== 00:18:09.211 Total : 22045.09 86.11 1451.48 316.83 7904.69 00:18:09.211 00:18:09.211 16:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:10.148 Initializing NVMe Controllers 00:18:10.148 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:10.148 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:10.148 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:10.148 Initialization complete. Launching workers. 00:18:10.148 ======================================================== 00:18:10.148 Latency(us) 00:18:10.148 Device Information : IOPS MiB/s Average min max 00:18:10.148 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4062.98 15.87 245.77 93.98 7322.71 00:18:10.148 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8112.28 5924.98 14183.53 00:18:10.148 ======================================================== 00:18:10.148 Total : 4186.98 16.36 478.74 93.98 14183.53 00:18:10.148 00:18:10.407 16:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:11.805 Initializing NVMe Controllers 00:18:11.805 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:11.805 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:11.805 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:11.805 Initialization complete. Launching workers. 00:18:11.805 ======================================================== 00:18:11.805 Latency(us) 00:18:11.805 Device Information : IOPS MiB/s Average min max 00:18:11.805 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8601.64 33.60 3719.61 516.71 9416.15 00:18:11.805 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3962.79 15.48 8120.57 6204.44 16862.16 00:18:11.805 ======================================================== 00:18:11.805 Total : 12564.43 49.08 5107.66 516.71 16862.16 00:18:11.805 00:18:11.805 16:53:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:11.805 16:53:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:14.345 Initializing NVMe Controllers 00:18:14.345 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:14.345 Controller IO queue size 128, less than required. 00:18:14.345 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:14.345 Controller IO queue size 128, less than required. 
00:18:14.345 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:14.345 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:14.345 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:14.345 Initialization complete. Launching workers. 00:18:14.345 ======================================================== 00:18:14.345 Latency(us) 00:18:14.345 Device Information : IOPS MiB/s Average min max 00:18:14.345 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1936.45 484.11 67272.52 37346.52 113171.97 00:18:14.345 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 699.76 174.94 190927.90 50283.68 305047.99 00:18:14.345 ======================================================== 00:18:14.345 Total : 2636.21 659.05 100095.79 37346.52 305047.99 00:18:14.345 00:18:14.345 16:53:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:18:14.604 Initializing NVMe Controllers 00:18:14.604 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:14.604 Controller IO queue size 128, less than required. 00:18:14.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:14.604 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:14.604 Controller IO queue size 128, less than required. 00:18:14.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:14.604 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:14.604 WARNING: Some requested NVMe devices were skipped 00:18:14.604 No valid NVMe controllers or AIO or URING devices found 00:18:14.604 16:53:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:17.140 Initializing NVMe Controllers 00:18:17.140 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:17.140 Controller IO queue size 128, less than required. 00:18:17.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:17.140 Controller IO queue size 128, less than required. 00:18:17.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:17.140 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:17.140 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:17.140 Initialization complete. Launching workers. 
00:18:17.140 00:18:17.140 ==================== 00:18:17.140 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:17.140 TCP transport: 00:18:17.140 polls: 10560 00:18:17.140 idle_polls: 5660 00:18:17.140 sock_completions: 4900 00:18:17.140 nvme_completions: 7009 00:18:17.140 submitted_requests: 10378 00:18:17.140 queued_requests: 1 00:18:17.140 00:18:17.140 ==================== 00:18:17.140 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:17.140 TCP transport: 00:18:17.140 polls: 10492 00:18:17.140 idle_polls: 5981 00:18:17.140 sock_completions: 4511 00:18:17.140 nvme_completions: 6983 00:18:17.140 submitted_requests: 10368 00:18:17.140 queued_requests: 1 00:18:17.140 ======================================================== 00:18:17.140 Latency(us) 00:18:17.140 Device Information : IOPS MiB/s Average min max 00:18:17.140 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1751.64 437.91 75056.35 44255.00 111411.07 00:18:17.140 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1745.14 436.28 73906.91 25522.57 129134.73 00:18:17.140 ======================================================== 00:18:17.140 Total : 3496.78 874.19 74482.70 25522.57 129134.73 00:18:17.140 00:18:17.140 16:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:17.140 16:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.399 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:18:17.399 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:18:17.399 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:18:17.967 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=42e26d9d-10c2-4b28-9b86-0f387bc1fe0e 00:18:17.967 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 42e26d9d-10c2-4b28-9b86-0f387bc1fe0e 00:18:17.967 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=42e26d9d-10c2-4b28-9b86-0f387bc1fe0e 00:18:17.967 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:18:17.967 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:18:17.967 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:18:17.967 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:17.967 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:18:17.967 { 00:18:17.967 "uuid": "42e26d9d-10c2-4b28-9b86-0f387bc1fe0e", 00:18:17.967 "name": "lvs_0", 00:18:17.967 "base_bdev": "Nvme0n1", 00:18:17.967 "total_data_clusters": 1278, 00:18:17.967 "free_clusters": 1278, 00:18:17.967 "block_size": 4096, 00:18:17.967 "cluster_size": 4194304 00:18:17.967 } 00:18:17.967 ]' 00:18:17.967 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="42e26d9d-10c2-4b28-9b86-0f387bc1fe0e") .free_clusters' 00:18:18.226 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278 00:18:18.226 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="42e26d9d-10c2-4b28-9b86-0f387bc1fe0e") .cluster_size' 00:18:18.226 5112 00:18:18.226 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:18:18.226 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:18:18.226 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:18:18.226 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:18:18.226 16:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 42e26d9d-10c2-4b28-9b86-0f387bc1fe0e lbd_0 5112 00:18:18.485 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=6630fde5-a4a4-4b67-b2c9-ab9903441987 00:18:18.485 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 6630fde5-a4a4-4b67-b2c9-ab9903441987 lvs_n_0 00:18:18.744 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=29fa5aa2-b301-4254-bb1b-a7762bdd2a5b 00:18:18.744 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 29fa5aa2-b301-4254-bb1b-a7762bdd2a5b 00:18:18.744 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=29fa5aa2-b301-4254-bb1b-a7762bdd2a5b 00:18:18.744 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:18:18.744 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:18:18.744 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:18:18.744 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:19.002 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:18:19.002 { 00:18:19.002 "uuid": "42e26d9d-10c2-4b28-9b86-0f387bc1fe0e", 00:18:19.002 "name": "lvs_0", 00:18:19.002 "base_bdev": "Nvme0n1", 00:18:19.002 "total_data_clusters": 1278, 00:18:19.002 "free_clusters": 0, 00:18:19.002 "block_size": 4096, 00:18:19.002 "cluster_size": 4194304 00:18:19.002 }, 00:18:19.002 { 00:18:19.002 "uuid": "29fa5aa2-b301-4254-bb1b-a7762bdd2a5b", 00:18:19.002 "name": "lvs_n_0", 00:18:19.002 "base_bdev": "6630fde5-a4a4-4b67-b2c9-ab9903441987", 00:18:19.002 "total_data_clusters": 1276, 00:18:19.002 "free_clusters": 1276, 00:18:19.002 "block_size": 4096, 00:18:19.002 "cluster_size": 4194304 00:18:19.002 } 00:18:19.002 ]' 00:18:19.002 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="29fa5aa2-b301-4254-bb1b-a7762bdd2a5b") .free_clusters' 00:18:19.261 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:18:19.261 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="29fa5aa2-b301-4254-bb1b-a7762bdd2a5b") .cluster_size' 00:18:19.261 5104 00:18:19.261 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:18:19.261 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:18:19.261 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:18:19.261 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:18:19.261 16:53:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 29fa5aa2-b301-4254-bb1b-a7762bdd2a5b lbd_nest_0 5104 00:18:19.519 16:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=2ed09c8a-cf71-484f-bc99-d6cb2336c05b 00:18:19.519 16:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:19.778 16:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:18:19.778 16:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 2ed09c8a-cf71-484f-bc99-d6cb2336c05b 00:18:20.037 16:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:20.296 16:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:18:20.296 16:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:18:20.296 16:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:20.296 16:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:20.296 16:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:20.555 Initializing NVMe Controllers 00:18:20.555 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:20.555 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:20.555 WARNING: Some requested NVMe devices were skipped 00:18:20.555 No valid NVMe controllers or AIO or URING devices found 00:18:20.555 16:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:20.555 16:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:32.760 Initializing NVMe Controllers 00:18:32.760 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:32.760 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:32.760 Initialization complete. Launching workers. 
00:18:32.760 ======================================================== 00:18:32.760 Latency(us) 00:18:32.760 Device Information : IOPS MiB/s Average min max 00:18:32.760 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 946.60 118.32 1056.06 324.05 8246.66 00:18:32.760 ======================================================== 00:18:32.760 Total : 946.60 118.32 1056.06 324.05 8246.66 00:18:32.760 00:18:32.760 16:53:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:32.760 16:53:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:32.760 16:53:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:32.760 Initializing NVMe Controllers 00:18:32.760 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:32.760 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:32.760 WARNING: Some requested NVMe devices were skipped 00:18:32.760 No valid NVMe controllers or AIO or URING devices found 00:18:32.760 16:53:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:32.760 16:53:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:42.739 Initializing NVMe Controllers 00:18:42.739 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:42.739 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:42.739 Initialization complete. Launching workers. 
00:18:42.739 ======================================================== 00:18:42.739 Latency(us) 00:18:42.739 Device Information : IOPS MiB/s Average min max 00:18:42.739 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1357.60 169.70 23607.76 5271.59 63746.81 00:18:42.739 ======================================================== 00:18:42.739 Total : 1357.60 169.70 23607.76 5271.59 63746.81 00:18:42.739 00:18:42.739 16:54:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:42.739 16:54:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:42.739 16:54:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:42.739 Initializing NVMe Controllers 00:18:42.739 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:42.739 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:42.739 WARNING: Some requested NVMe devices were skipped 00:18:42.739 No valid NVMe controllers or AIO or URING devices found 00:18:42.739 16:54:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:42.739 16:54:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:52.719 Initializing NVMe Controllers 00:18:52.719 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:52.719 Controller IO queue size 128, less than required. 00:18:52.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:52.719 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:52.719 Initialization complete. Launching workers. 
00:18:52.719 ======================================================== 00:18:52.719 Latency(us) 00:18:52.719 Device Information : IOPS MiB/s Average min max 00:18:52.719 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4061.50 507.69 31584.02 7674.26 72112.39 00:18:52.719 ======================================================== 00:18:52.719 Total : 4061.50 507.69 31584.02 7674.26 72112.39 00:18:52.719 00:18:52.719 16:54:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:52.719 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2ed09c8a-cf71-484f-bc99-d6cb2336c05b 00:18:52.978 16:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:53.546 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6630fde5-a4a4-4b67-b2c9-ab9903441987 00:18:53.805 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:54.064 rmmod nvme_tcp 00:18:54.064 rmmod nvme_fabrics 00:18:54.064 rmmod nvme_keyring 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 90399 ']' 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 90399 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 90399 ']' 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 90399 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90399 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:54.064 killing process with pid 90399 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90399' 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 90399 00:18:54.064 16:54:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 90399 00:18:55.442 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:55.442 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:55.442 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:55.442 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:55.442 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:18:55.442 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:55.442 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:18:55.442 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:55.442 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:55.442 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:55.442 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:55.443 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:55.443 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:55.443 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:55.443 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:55.443 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:55.443 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:55.443 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:55.443 16:54:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:55.443 ************************************ 00:18:55.443 END TEST nvmf_perf 00:18:55.443 ************************************ 00:18:55.443 00:18:55.443 real 0m51.172s 00:18:55.443 user 3m13.158s 00:18:55.443 sys 0m12.587s 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.443 ************************************ 00:18:55.443 START TEST nvmf_fio_host 00:18:55.443 ************************************ 00:18:55.443 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:55.443 * Looking for test storage... 00:18:55.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.703 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:55.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.704 --rc genhtml_branch_coverage=1 00:18:55.704 --rc genhtml_function_coverage=1 00:18:55.704 --rc genhtml_legend=1 00:18:55.704 --rc geninfo_all_blocks=1 00:18:55.704 --rc geninfo_unexecuted_blocks=1 00:18:55.704 00:18:55.704 ' 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:55.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.704 --rc genhtml_branch_coverage=1 00:18:55.704 --rc genhtml_function_coverage=1 00:18:55.704 --rc genhtml_legend=1 00:18:55.704 --rc geninfo_all_blocks=1 00:18:55.704 --rc geninfo_unexecuted_blocks=1 00:18:55.704 00:18:55.704 ' 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:55.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.704 --rc genhtml_branch_coverage=1 00:18:55.704 --rc genhtml_function_coverage=1 00:18:55.704 --rc genhtml_legend=1 00:18:55.704 --rc geninfo_all_blocks=1 00:18:55.704 --rc geninfo_unexecuted_blocks=1 00:18:55.704 00:18:55.704 ' 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:55.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.704 --rc genhtml_branch_coverage=1 00:18:55.704 --rc genhtml_function_coverage=1 00:18:55.704 --rc genhtml_legend=1 00:18:55.704 --rc geninfo_all_blocks=1 00:18:55.704 --rc geninfo_unexecuted_blocks=1 00:18:55.704 00:18:55.704 ' 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.704 16:54:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.704 16:54:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:55.704 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:55.705 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
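For readers reproducing this environment by hand: the nvmftestinit/nvmf_veth_init sequence that follows in the trace builds a veth-plus-bridge topology with the target side isolated in the nvmf_tgt_ns_spdk namespace. A minimal sketch, reduced to one initiator and one target interface and using only commands that appear in this trace (the interface names, namespace name and 10.0.0.0/24 addresses are simply this run's defaults), would be:

  ip netns add nvmf_tgt_ns_spdk                                   # target-side network namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator veth pair (both ends on the host)
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br                         # bridge the two host-side peer ends together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic on the test port

After this, the target address 10.0.0.3 is reachable from the host across nvmf_br, which is exactly what the ping checks further down verify.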
00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:55.705 Cannot find device "nvmf_init_br" 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:55.705 Cannot find device "nvmf_init_br2" 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:55.705 Cannot find device "nvmf_tgt_br" 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:55.705 Cannot find device "nvmf_tgt_br2" 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:55.705 Cannot find device "nvmf_init_br" 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:55.705 Cannot find device "nvmf_init_br2" 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:55.705 Cannot find device "nvmf_tgt_br" 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:55.705 Cannot find device "nvmf_tgt_br2" 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:55.705 Cannot find device "nvmf_br" 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:55.705 Cannot find device "nvmf_init_if" 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:55.705 Cannot find device "nvmf_init_if2" 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:55.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:55.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:55.705 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:55.965 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:55.965 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:55.965 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:18:55.966 00:18:55.966 --- 10.0.0.3 ping statistics --- 00:18:55.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.966 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:55.966 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:55.966 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:55.966 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:18:55.966 00:18:55.966 --- 10.0.0.4 ping statistics --- 00:18:55.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.966 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:55.966 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:56.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:56.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:56.224 00:18:56.224 --- 10.0.0.1 ping statistics --- 00:18:56.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.224 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:56.224 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:56.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:18:56.224 00:18:56.224 --- 10.0.0.2 ping statistics --- 00:18:56.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.224 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:56.224 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.224 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=91260 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 91260 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 91260 ']' 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.225 16:54:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.225 [2024-11-29 16:54:19.851984] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:18:56.225 [2024-11-29 16:54:19.852786] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.225 [2024-11-29 16:54:19.984810] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:56.484 [2024-11-29 16:54:20.017207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.484 [2024-11-29 16:54:20.041224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.484 [2024-11-29 16:54:20.041289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.484 [2024-11-29 16:54:20.041305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.484 [2024-11-29 16:54:20.041315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.484 [2024-11-29 16:54:20.041338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
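At this point the target application is coming up; its launch command was traced a few lines above (host/fio.sh@23). A hand-run equivalent, again using only invocations visible in this trace (the shared-memory id -i 0, tracepoint mask -e 0xFFFF and core mask -m 0xF are just the values this run uses), looks like:

  # start the NVMe-oF target inside the test namespace
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # once the RPC socket is listening, create the TCP transport the test drives
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

The notices above also point at 'spdk_trace -s nvmf -i 0' for capturing a runtime snapshot of the enabled tracepoint group.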
00:18:56.484 [2024-11-29 16:54:20.042204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.484 [2024-11-29 16:54:20.042357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.484 [2024-11-29 16:54:20.042449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.484 [2024-11-29 16:54:20.042459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.484 [2024-11-29 16:54:20.076679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:57.051 16:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.051 16:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:18:57.051 16:54:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:57.309 [2024-11-29 16:54:21.039347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.309 16:54:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:57.309 16:54:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:57.309 16:54:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.309 16:54:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:57.874 Malloc1 00:18:57.874 16:54:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:58.132 16:54:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:58.391 16:54:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:58.391 [2024-11-29 16:54:22.168632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:58.651 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:58.651 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:58.651 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:58.651 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:58.651 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:58.651 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:58.651 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:58.651 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:58.651 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:58.651 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:58.651 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.651 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:58.651 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:58.651 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:58.910 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:58.910 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:58.910 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.910 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:58.910 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:58.910 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:58.910 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:58.910 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:58.910 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:58.910 16:54:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:58.910 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:58.910 fio-3.35 00:18:58.910 Starting 1 thread 00:19:01.439 00:19:01.439 test: (groupid=0, jobs=1): err= 0: pid=91337: Fri Nov 29 16:54:24 2024 00:19:01.439 read: IOPS=9013, BW=35.2MiB/s (36.9MB/s)(70.7MiB/2007msec) 00:19:01.439 slat (nsec): min=1752, max=290709, avg=2349.72, stdev=2928.06 00:19:01.439 clat (usec): min=2044, max=13463, avg=7385.31, stdev=579.80 00:19:01.439 lat (usec): min=2075, max=13465, avg=7387.66, stdev=579.55 00:19:01.439 clat percentiles (usec): 00:19:01.439 | 1.00th=[ 6259], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 6915], 00:19:01.439 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:19:01.439 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8291], 00:19:01.439 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[11469], 99.95th=[12518], 00:19:01.439 | 99.99th=[13435] 00:19:01.439 bw ( KiB/s): min=35088, max=36504, per=99.95%, avg=36036.00, stdev=642.94, samples=4 00:19:01.439 iops : min= 8772, max= 9126, avg=9009.00, stdev=160.74, samples=4 00:19:01.439 write: IOPS=9032, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2007msec); 0 zone resets 00:19:01.439 slat (nsec): min=1830, max=186569, avg=2404.21, stdev=2061.64 00:19:01.439 clat (usec): min=1928, max=12549, avg=6736.88, stdev=515.02 00:19:01.439 lat (usec): min=1940, max=12552, avg=6739.28, stdev=514.88 00:19:01.439 
clat percentiles (usec): 00:19:01.439 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6325], 00:19:01.439 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 00:19:01.439 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7570], 00:19:01.439 | 99.00th=[ 7898], 99.50th=[ 8094], 99.90th=[10683], 99.95th=[11338], 00:19:01.439 | 99.99th=[12518] 00:19:01.439 bw ( KiB/s): min=35416, max=37240, per=100.00%, avg=36146.00, stdev=851.70, samples=4 00:19:01.439 iops : min= 8854, max= 9310, avg=9036.50, stdev=212.92, samples=4 00:19:01.439 lat (msec) : 2=0.01%, 4=0.15%, 10=99.67%, 20=0.18% 00:19:01.439 cpu : usr=72.98%, sys=19.84%, ctx=21, majf=0, minf=4 00:19:01.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:01.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:01.439 issued rwts: total=18090,18128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:01.439 00:19:01.439 Run status group 0 (all jobs): 00:19:01.439 READ: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.7MiB (74.1MB), run=2007-2007msec 00:19:01.439 WRITE: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.3MB), run=2007-2007msec 00:19:01.439 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:01.439 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:01.439 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:01.439 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:01.440 16:54:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:01.440 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:01.440 fio-3.35 00:19:01.440 Starting 1 thread 00:19:04.032 00:19:04.032 test: (groupid=0, jobs=1): err= 0: pid=91386: Fri Nov 29 16:54:27 2024 00:19:04.032 read: IOPS=8485, BW=133MiB/s (139MB/s)(266MiB/2007msec) 00:19:04.032 slat (usec): min=2, max=119, avg= 3.67, stdev= 2.28 00:19:04.032 clat (usec): min=1719, max=17213, avg=8455.76, stdev=2622.03 00:19:04.032 lat (usec): min=1722, max=17216, avg=8459.43, stdev=2622.12 00:19:04.032 clat percentiles (usec): 00:19:04.032 | 1.00th=[ 4015], 5.00th=[ 4686], 10.00th=[ 5211], 20.00th=[ 6063], 00:19:04.032 | 30.00th=[ 6849], 40.00th=[ 7570], 50.00th=[ 8225], 60.00th=[ 8848], 00:19:04.032 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11863], 95.00th=[13173], 00:19:04.032 | 99.00th=[15795], 99.50th=[16188], 99.90th=[17171], 99.95th=[17171], 00:19:04.032 | 99.99th=[17171] 00:19:04.032 bw ( KiB/s): min=63936, max=70272, per=50.30%, avg=68296.00, stdev=2936.61, samples=4 00:19:04.032 iops : min= 3996, max= 4392, avg=4268.50, stdev=183.54, samples=4 00:19:04.032 write: IOPS=4734, BW=74.0MiB/s (77.6MB/s)(139MiB/1878msec); 0 zone resets 00:19:04.032 slat (usec): min=31, max=348, avg=37.82, stdev= 9.47 00:19:04.032 clat (usec): min=5789, max=19366, avg=11858.11, stdev=2327.61 00:19:04.032 lat (usec): min=5822, max=19407, avg=11895.92, stdev=2328.82 00:19:04.032 clat percentiles (usec): 00:19:04.032 | 1.00th=[ 7439], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:19:04.032 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[12125], 00:19:04.032 | 70.00th=[12911], 80.00th=[13960], 90.00th=[15139], 95.00th=[16057], 00:19:04.033 | 99.00th=[17957], 99.50th=[18482], 99.90th=[18744], 99.95th=[19006], 00:19:04.033 | 99.99th=[19268] 00:19:04.033 bw ( KiB/s): min=68000, max=72896, per=93.23%, avg=70632.00, stdev=2090.62, samples=4 00:19:04.033 iops : min= 4250, max= 4556, avg=4414.50, stdev=130.66, samples=4 00:19:04.033 lat (msec) : 2=0.01%, 4=0.63%, 10=55.51%, 20=43.85% 00:19:04.033 cpu : usr=82.10%, sys=13.46%, ctx=7, majf=0, minf=12 00:19:04.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:04.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:04.033 issued rwts: total=17031,8892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.033 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:04.033 00:19:04.033 Run status group 0 (all jobs): 00:19:04.033 
READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=266MiB (279MB), run=2007-2007msec 00:19:04.033 WRITE: bw=74.0MiB/s (77.6MB/s), 74.0MiB/s-74.0MiB/s (77.6MB/s-77.6MB/s), io=139MiB (146MB), run=1878-1878msec 00:19:04.033 16:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.033 16:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:19:04.033 16:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:19:04.033 16:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:19:04.033 16:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:19:04.033 16:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:19:04.033 16:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:04.033 16:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:04.033 16:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:19:04.291 16:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:19:04.291 16:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:04.291 16:54:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:19:04.550 Nvme0n1 00:19:04.550 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:19:04.808 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=8f239658-0ea5-49d5-922c-2dd189b6db17 00:19:04.808 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 8f239658-0ea5-49d5-922c-2dd189b6db17 00:19:04.808 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=8f239658-0ea5-49d5-922c-2dd189b6db17 00:19:04.808 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:19:04.808 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:19:04.808 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:19:04.808 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:05.066 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:19:05.066 { 00:19:05.066 "uuid": "8f239658-0ea5-49d5-922c-2dd189b6db17", 00:19:05.066 "name": "lvs_0", 00:19:05.066 "base_bdev": "Nvme0n1", 00:19:05.066 "total_data_clusters": 4, 00:19:05.066 "free_clusters": 4, 00:19:05.066 "block_size": 4096, 00:19:05.066 "cluster_size": 1073741824 00:19:05.066 } 00:19:05.066 ]' 00:19:05.066 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="8f239658-0ea5-49d5-922c-2dd189b6db17") .free_clusters' 00:19:05.066 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:19:05.066 
16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="8f239658-0ea5-49d5-922c-2dd189b6db17") .cluster_size' 00:19:05.066 4096 00:19:05.066 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:19:05.067 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:19:05.067 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4096 00:19:05.067 16:54:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:19:05.325 21f1e1ea-886a-4793-812d-d1f8050accee 00:19:05.325 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:19:05.583 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:19:05.841 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:19:06.099 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:06.099 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:06.099 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:06.099 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:06.099 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:06.099 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:06.099 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:06.099 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:06.099 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:06.099 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:06.099 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:06.099 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:06.100 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:06.100 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:06.100 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:06.100 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:06.100 16:54:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:06.100 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:06.100 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:06.100 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:06.100 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:06.100 16:54:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:06.358 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:06.358 fio-3.35 00:19:06.358 Starting 1 thread 00:19:08.889 00:19:08.889 test: (groupid=0, jobs=1): err= 0: pid=91496: Fri Nov 29 16:54:32 2024 00:19:08.889 read: IOPS=6184, BW=24.2MiB/s (25.3MB/s)(48.5MiB/2009msec) 00:19:08.889 slat (usec): min=2, max=329, avg= 2.81, stdev= 3.94 00:19:08.889 clat (usec): min=3026, max=18579, avg=10817.75, stdev=870.25 00:19:08.889 lat (usec): min=3035, max=18581, avg=10820.56, stdev=869.92 00:19:08.889 clat percentiles (usec): 00:19:08.889 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:19:08.889 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:19:08.889 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:19:08.889 | 99.00th=[12649], 99.50th=[13173], 99.90th=[16057], 99.95th=[17433], 00:19:08.889 | 99.99th=[18482] 00:19:08.889 bw ( KiB/s): min=23696, max=25258, per=99.84%, avg=24696.50, stdev=701.77, samples=4 00:19:08.889 iops : min= 5924, max= 6314, avg=6174.00, stdev=175.31, samples=4 00:19:08.889 write: IOPS=6169, BW=24.1MiB/s (25.3MB/s)(48.4MiB/2009msec); 0 zone resets 00:19:08.889 slat (usec): min=2, max=246, avg= 2.81, stdev= 2.79 00:19:08.889 clat (usec): min=2388, max=18640, avg=9806.99, stdev=848.21 00:19:08.889 lat (usec): min=2401, max=18643, avg=9809.80, stdev=848.03 00:19:08.889 clat percentiles (usec): 00:19:08.889 | 1.00th=[ 8094], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:19:08.889 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:19:08.889 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:19:08.889 | 99.00th=[11600], 99.50th=[11994], 99.90th=[16188], 99.95th=[17433], 00:19:08.889 | 99.99th=[18482] 00:19:08.889 bw ( KiB/s): min=24392, max=24796, per=99.83%, avg=24635.00, stdev=174.09, samples=4 00:19:08.889 iops : min= 6098, max= 6199, avg=6158.75, stdev=43.52, samples=4 00:19:08.889 lat (msec) : 4=0.06%, 10=37.51%, 20=62.43% 00:19:08.889 cpu : usr=74.80%, sys=19.97%, ctx=13, majf=0, minf=20 00:19:08.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:08.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.889 issued rwts: total=12424,12394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.889 00:19:08.889 Run status group 0 (all jobs): 00:19:08.889 READ: bw=24.2MiB/s (25.3MB/s), 24.2MiB/s-24.2MiB/s (25.3MB/s-25.3MB/s), io=48.5MiB (50.9MB), run=2009-2009msec 00:19:08.889 
WRITE: bw=24.1MiB/s (25.3MB/s), 24.1MiB/s-24.1MiB/s (25.3MB/s-25.3MB/s), io=48.4MiB (50.8MB), run=2009-2009msec 00:19:08.889 16:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:08.889 16:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:19:09.147 16:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=4b04b3c8-fe94-427f-937e-107ed8ceb5fb 00:19:09.147 16:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 4b04b3c8-fe94-427f-937e-107ed8ceb5fb 00:19:09.147 16:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=4b04b3c8-fe94-427f-937e-107ed8ceb5fb 00:19:09.147 16:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:19:09.147 16:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:19:09.147 16:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:19:09.147 16:54:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:09.405 16:54:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:19:09.405 { 00:19:09.405 "uuid": "8f239658-0ea5-49d5-922c-2dd189b6db17", 00:19:09.405 "name": "lvs_0", 00:19:09.405 "base_bdev": "Nvme0n1", 00:19:09.405 "total_data_clusters": 4, 00:19:09.405 "free_clusters": 0, 00:19:09.405 "block_size": 4096, 00:19:09.405 "cluster_size": 1073741824 00:19:09.405 }, 00:19:09.405 { 00:19:09.405 "uuid": "4b04b3c8-fe94-427f-937e-107ed8ceb5fb", 00:19:09.405 "name": "lvs_n_0", 00:19:09.405 "base_bdev": "21f1e1ea-886a-4793-812d-d1f8050accee", 00:19:09.405 "total_data_clusters": 1022, 00:19:09.405 "free_clusters": 1022, 00:19:09.405 "block_size": 4096, 00:19:09.405 "cluster_size": 4194304 00:19:09.405 } 00:19:09.405 ]' 00:19:09.405 16:54:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="4b04b3c8-fe94-427f-937e-107ed8ceb5fb") .free_clusters' 00:19:09.676 16:54:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:19:09.677 16:54:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="4b04b3c8-fe94-427f-937e-107ed8ceb5fb") .cluster_size' 00:19:09.677 4088 00:19:09.677 16:54:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:19:09.677 16:54:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:19:09.677 16:54:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:19:09.677 16:54:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:19:09.934 34ed3a66-5b0f-40c2-ac3b-a943a7fc2180 00:19:09.934 16:54:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:19:10.192 16:54:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:19:10.192 16:54:33 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:10.758 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:10.759 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:10.759 16:54:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:10.759 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:10.759 fio-3.35 00:19:10.759 Starting 1 thread 00:19:13.288 00:19:13.288 test: (groupid=0, jobs=1): err= 0: pid=91574: Fri Nov 29 16:54:36 2024 00:19:13.288 read: 
IOPS=5449, BW=21.3MiB/s (22.3MB/s)(42.8MiB/2009msec) 00:19:13.288 slat (nsec): min=1986, max=298756, avg=2654.04, stdev=4024.18 00:19:13.288 clat (usec): min=3340, max=21823, avg=12328.72, stdev=1020.93 00:19:13.288 lat (usec): min=3375, max=21826, avg=12331.38, stdev=1020.54 00:19:13.288 clat percentiles (usec): 00:19:13.288 | 1.00th=[10028], 5.00th=[10814], 10.00th=[11207], 20.00th=[11600], 00:19:13.288 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:19:13.288 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:19:13.288 | 99.00th=[14484], 99.50th=[15139], 99.90th=[18482], 99.95th=[20317], 00:19:13.288 | 99.99th=[21890] 00:19:13.288 bw ( KiB/s): min=20944, max=22224, per=99.79%, avg=21754.00, stdev=568.52, samples=4 00:19:13.288 iops : min= 5236, max= 5556, avg=5438.50, stdev=142.13, samples=4 00:19:13.288 write: IOPS=5427, BW=21.2MiB/s (22.2MB/s)(42.6MiB/2009msec); 0 zone resets 00:19:13.288 slat (usec): min=2, max=282, avg= 2.77, stdev= 3.28 00:19:13.288 clat (usec): min=2439, max=18525, avg=11126.75, stdev=944.11 00:19:13.288 lat (usec): min=2453, max=18528, avg=11129.52, stdev=943.93 00:19:13.288 clat percentiles (usec): 00:19:13.288 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:19:13.288 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:19:13.288 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:19:13.288 | 99.00th=[13173], 99.50th=[13566], 99.90th=[17171], 99.95th=[18482], 00:19:13.288 | 99.99th=[18482] 00:19:13.288 bw ( KiB/s): min=21440, max=21912, per=99.94%, avg=21698.00, stdev=241.72, samples=4 00:19:13.288 iops : min= 5360, max= 5478, avg=5424.50, stdev=60.43, samples=4 00:19:13.288 lat (msec) : 4=0.05%, 10=4.78%, 20=95.12%, 50=0.05% 00:19:13.288 cpu : usr=76.25%, sys=19.37%, ctx=16, majf=0, minf=20 00:19:13.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:13.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:13.288 issued rwts: total=10949,10904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:13.288 00:19:13.288 Run status group 0 (all jobs): 00:19:13.288 READ: bw=21.3MiB/s (22.3MB/s), 21.3MiB/s-21.3MiB/s (22.3MB/s-22.3MB/s), io=42.8MiB (44.8MB), run=2009-2009msec 00:19:13.288 WRITE: bw=21.2MiB/s (22.2MB/s), 21.2MiB/s-21.2MiB/s (22.2MB/s-22.2MB/s), io=42.6MiB (44.7MB), run=2009-2009msec 00:19:13.288 16:54:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:13.288 16:54:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:19:13.288 16:54:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:19:13.856 16:54:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:19:13.856 16:54:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:19:14.115 16:54:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:14.466 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:15.402 rmmod nvme_tcp 00:19:15.402 rmmod nvme_fabrics 00:19:15.402 rmmod nvme_keyring 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 91260 ']' 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 91260 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 91260 ']' 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 91260 00:19:15.402 16:54:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:19:15.402 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.402 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91260 00:19:15.402 killing process with pid 91260 00:19:15.402 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.402 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.403 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91260' 00:19:15.403 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 91260 00:19:15.403 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 91260 00:19:15.403 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:15.403 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:15.403 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:15.403 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:19:15.403 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:19:15.403 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:15.403 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:19:15.403 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:15.403 
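
Note: the iptr step in nvmftestfini above removes only the firewall rules this test added. Every rule is installed with an SPDK_NVMF comment, so teardown can filter the saved ruleset instead of flushing the whole table. A minimal sketch of that pattern, taken from the install/cleanup commands in this trace:

    # rules are installed with a recognizable comment ...
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # ... so cleanup drops exactly those rules and keeps the rest of the ruleset intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore
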
16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:15.403 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:15.403 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:19:15.662 00:19:15.662 real 0m20.262s 00:19:15.662 user 1m28.387s 00:19:15.662 sys 0m4.436s 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.662 16:54:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.662 ************************************ 00:19:15.662 END TEST nvmf_fio_host 00:19:15.662 ************************************ 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.923 ************************************ 00:19:15.923 START TEST nvmf_failover 00:19:15.923 ************************************ 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:15.923 * Looking for test storage... 
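
Note: nvmf_veth_fini above unwinds the virtual topology in the reverse order it was built: detach the bridge ports, bring them down, delete the bridge and the host-side veth devices, then remove the target-side interfaces and the namespace. A condensed sketch of the same teardown, assuming the interface names used by this harness and that remove_spdk_ns amounts to deleting the nvmf_tgt_ns_spdk namespace:

    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumption: also tears down nvmf_tgt_if/nvmf_tgt_if2 inside it
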
00:19:15.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.923 --rc genhtml_branch_coverage=1 00:19:15.923 --rc genhtml_function_coverage=1 00:19:15.923 --rc genhtml_legend=1 00:19:15.923 --rc geninfo_all_blocks=1 00:19:15.923 --rc geninfo_unexecuted_blocks=1 00:19:15.923 00:19:15.923 ' 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.923 --rc genhtml_branch_coverage=1 00:19:15.923 --rc genhtml_function_coverage=1 00:19:15.923 --rc genhtml_legend=1 00:19:15.923 --rc geninfo_all_blocks=1 00:19:15.923 --rc geninfo_unexecuted_blocks=1 00:19:15.923 00:19:15.923 ' 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.923 --rc genhtml_branch_coverage=1 00:19:15.923 --rc genhtml_function_coverage=1 00:19:15.923 --rc genhtml_legend=1 00:19:15.923 --rc geninfo_all_blocks=1 00:19:15.923 --rc geninfo_unexecuted_blocks=1 00:19:15.923 00:19:15.923 ' 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:15.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.923 --rc genhtml_branch_coverage=1 00:19:15.923 --rc genhtml_function_coverage=1 00:19:15.923 --rc genhtml_legend=1 00:19:15.923 --rc geninfo_all_blocks=1 00:19:15.923 --rc geninfo_unexecuted_blocks=1 00:19:15.923 00:19:15.923 ' 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.923 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.924 
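
Note: the NVME_HOSTNQN/NVME_HOSTID pair set up above comes from nvme-cli: gen-hostnqn prints an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and the host ID used here is its trailing UUID. A minimal sketch of that derivation; the parameter-expansion trim is an assumption about how common.sh extracts the UUID, not a quote of it:

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # strip everything up to the last ':' -> the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
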
16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:15.924 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
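
Note: failover.sh drives two SPDK processes over two separate RPC sockets: the nvmf target answers on the default /var/tmp/spdk.sock, while the bdevperf initiator started later gets its own bdevperf_rpc_sock (/var/tmp/bdevperf.sock), selected with rpc.py -s. A minimal sketch of the split, with paths and arguments taken from this trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target-side calls use the default socket (/var/tmp/spdk.sock)
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    # initiator-side calls are routed to bdevperf's socket
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
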
00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:15.924 Cannot find device "nvmf_init_br" 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:15.924 Cannot find device "nvmf_init_br2" 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:15.924 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:16.184 Cannot find device "nvmf_tgt_br" 00:19:16.184 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:16.184 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:16.184 Cannot find device "nvmf_tgt_br2" 00:19:16.184 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:16.184 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:16.184 Cannot find device "nvmf_init_br" 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:16.185 Cannot find device "nvmf_init_br2" 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:16.185 Cannot find device "nvmf_tgt_br" 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:16.185 Cannot find device "nvmf_tgt_br2" 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:16.185 Cannot find device "nvmf_br" 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:16.185 Cannot find device "nvmf_init_if" 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:16.185 Cannot find device "nvmf_init_if2" 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:16.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:16.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:16.185 
16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:16.185 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:16.444 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:16.444 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:16.444 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:16.444 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:16.444 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:16.444 16:54:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:16.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:16.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:19:16.444 00:19:16.444 --- 10.0.0.3 ping statistics --- 00:19:16.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.444 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:16.444 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:16.444 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:19:16.444 00:19:16.444 --- 10.0.0.4 ping statistics --- 00:19:16.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.444 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:16.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:16.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:16.444 00:19:16.444 --- 10.0.0.1 ping statistics --- 00:19:16.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.444 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:16.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:19:16.444 00:19:16.444 --- 10.0.0.2 ping statistics --- 00:19:16.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.444 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=91863 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 91863 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:16.444 16:54:40 
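
Note: because initiator and target share one host in this run, nvmf_veth_init builds a private topology: veth pairs whose *_if ends carry the addresses (10.0.0.1/.2 on the host side, 10.0.0.3/.4 inside the nvmf_tgt_ns_spdk namespace) and whose *_br ends are enslaved to the nvmf_br bridge; TCP/4420 is opened with the SPDK_NVMF-tagged iptables rules and reachability is ping-verified before nvmf_tgt is launched inside the namespace. A condensed sketch covering one of the two pairs, using the interface names from the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # links brought up and TCP/4420 allowed through iptables as traced above, then:
    ping -c 1 10.0.0.3   # host -> target namespace
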
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 91863 ']' 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.444 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:16.444 [2024-11-29 16:54:40.177002] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:16.444 [2024-11-29 16:54:40.177094] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.703 [2024-11-29 16:54:40.309131] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:16.703 [2024-11-29 16:54:40.335883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:16.703 [2024-11-29 16:54:40.354097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.703 [2024-11-29 16:54:40.354164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.703 [2024-11-29 16:54:40.354190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.703 [2024-11-29 16:54:40.354197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.703 [2024-11-29 16:54:40.354203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:16.703 [2024-11-29 16:54:40.354994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.703 [2024-11-29 16:54:40.355083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.703 [2024-11-29 16:54:40.355087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.703 [2024-11-29 16:54:40.383074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:16.703 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.703 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:16.703 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:16.703 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:16.703 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:16.703 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.703 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:16.960 [2024-11-29 16:54:40.676260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.960 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:17.219 Malloc0 00:19:17.219 16:54:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:17.478 16:54:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:17.737 16:54:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:17.996 [2024-11-29 16:54:41.669680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:17.996 16:54:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:18.255 [2024-11-29 16:54:41.897895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:18.255 16:54:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:18.514 [2024-11-29 16:54:42.126110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:18.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
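
Note: the rpc.py calls traced above are the entire target-side fixture for the failover test: one TCP transport, one 64 MiB malloc bdev, one subsystem, and three listeners on the same address so the initiator has paths to fail between. Condensed from the trace (rpc_py abbreviates the full scripts/rpc.py path; the loop is just a compaction of the three add_listener calls):

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
    done
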
00:19:18.514 16:54:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:18.514 16:54:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=91909 00:19:18.514 16:54:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:18.514 16:54:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 91909 /var/tmp/bdevperf.sock 00:19:18.514 16:54:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 91909 ']' 00:19:18.514 16:54:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.514 16:54:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.514 16:54:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.514 16:54:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.514 16:54:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:18.772 16:54:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.772 16:54:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:18.772 16:54:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:19.031 NVMe0n1 00:19:19.031 16:54:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:19.290 00:19:19.290 16:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=91925 00:19:19.290 16:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:19.290 16:54:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:20.666 16:54:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:20.666 16:54:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:23.951 16:54:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:23.952 00:19:23.952 16:54:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:24.210 16:54:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:27.502 16:54:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:27.760 [2024-11-29 16:54:51.297007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:27.760 16:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:28.697 16:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:28.956 16:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 91925 00:19:35.524 { 00:19:35.524 "results": [ 00:19:35.524 { 00:19:35.524 "job": "NVMe0n1", 00:19:35.524 "core_mask": "0x1", 00:19:35.524 "workload": "verify", 00:19:35.524 "status": "finished", 00:19:35.524 "verify_range": { 00:19:35.524 "start": 0, 00:19:35.524 "length": 16384 00:19:35.524 }, 00:19:35.524 "queue_depth": 128, 00:19:35.524 "io_size": 4096, 00:19:35.524 "runtime": 15.007883, 00:19:35.524 "iops": 9805.780069047712, 00:19:35.524 "mibps": 38.30382839471763, 00:19:35.524 "io_failed": 3157, 00:19:35.524 "io_timeout": 0, 00:19:35.524 "avg_latency_us": 12749.570391507628, 00:19:35.524 "min_latency_us": 621.8472727272728, 00:19:35.524 "max_latency_us": 14120.02909090909 00:19:35.524 } 00:19:35.524 ], 00:19:35.524 "core_count": 1 00:19:35.524 } 00:19:35.524 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 91909 00:19:35.524 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 91909 ']' 00:19:35.524 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 91909 00:19:35.524 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:35.524 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.524 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91909 00:19:35.524 killing process with pid 91909 00:19:35.524 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.524 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.524 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91909' 00:19:35.524 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 91909 00:19:35.524 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 91909 00:19:35.524 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:35.524 [2024-11-29 16:54:42.185437] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:35.524 [2024-11-29 16:54:42.185538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91909 ] 00:19:35.524 [2024-11-29 16:54:42.306486] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
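
Note: the failover pass above keeps a 15-second verify workload running on NVMe0 while host/failover.sh walks the listener set out from under it: 4420 is removed, 4422 is attached and 4421 removed, then 4420 comes back and 4422 goes away. The io_failed count (3157) in the JSON summary most likely reflects I/O outstanding during those path switches, and the ABORTED - SQ DELETION completions dumped from try.txt below are the per-command view of the same events. A condensed sketch of the sequence, with sockets and arguments taken from the trace ($rpc_py abbreviates the full scripts/rpc.py path):

    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # while bdevperf runs its verify workload, rotate the target listeners:
    $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    $rpc_py nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
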
00:19:35.524 [2024-11-29 16:54:42.335866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.524 [2024-11-29 16:54:42.359449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.524 [2024-11-29 16:54:42.392018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:35.524 Running I/O for 15 seconds... 00:19:35.524 7445.00 IOPS, 29.08 MiB/s [2024-11-29T16:54:59.317Z] [2024-11-29 16:54:44.334650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.334705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.334763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.334778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.334794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.334807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.334822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.334835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.334849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.334862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.334876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.334889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.334904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.334916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.334931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.334944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.334958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.334972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.334986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.334999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:35.525 [2024-11-29 16:54:44.335287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.335466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.335495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.335523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.335550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.335578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335593] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.335606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.335634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.525 [2024-11-29 16:54:44.335662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.525 [2024-11-29 16:54:44.335932] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.525 [2024-11-29 16:54:44.335945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.335961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.335974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.335989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70920 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:35.526 [2024-11-29 16:54:44.336574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.526 [2024-11-29 16:54:44.336813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336869] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.336973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.336987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.337002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.337015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.337030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.337043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.337058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.337071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.337086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.337099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.337113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.526 [2024-11-29 16:54:44.337126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.526 [2024-11-29 16:54:44.337141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337154] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.337242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.337270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.337298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.337340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.337375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.337404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.337432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.337459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 
[2024-11-29 16:54:44.337765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.527 [2024-11-29 16:54:44.337867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.337895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.337924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.337951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.337979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.337994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.338007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.338022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.338035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.338050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.338063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.338084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.338098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.338115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.338129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.338144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.338157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.338172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.338185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.338200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.338213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.338228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.338241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.338258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.338271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.338286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.527 [2024-11-29 16:54:44.338300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.527 [2024-11-29 16:54:44.338314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.528 [2024-11-29 16:54:44.338353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:44.338371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:120 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.528 [2024-11-29 16:54:44.338385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:44.338400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.528 [2024-11-29 16:54:44.338414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:44.338429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.528 [2024-11-29 16:54:44.338442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:44.338457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.528 [2024-11-29 16:54:44.338478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:44.338495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.528 [2024-11-29 16:54:44.338508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:44.338524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.528 [2024-11-29 16:54:44.338537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:44.338553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.528 [2024-11-29 16:54:44.338566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:44.338618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.528 [2024-11-29 16:54:44.338633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.528 [2024-11-29 16:54:44.338647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70648 len:8 PRP1 0x0 PRP2 0x0 00:19:35.528 [2024-11-29 16:54:44.338661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:44.338718] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:35.528 [2024-11-29 16:54:44.338785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.528 [2024-11-29 16:54:44.338806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:44.338821] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.528 [2024-11-29 16:54:44.338834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:44.338848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.528 [2024-11-29 16:54:44.338860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:44.338874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.528 [2024-11-29 16:54:44.338890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:44.338904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:35.528 [2024-11-29 16:54:44.338956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fd160 (9): Bad file descriptor 00:19:35.528 [2024-11-29 16:54:44.342639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:35.528 [2024-11-29 16:54:44.367073] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:19:35.528 8318.50 IOPS, 32.49 MiB/s [2024-11-29T16:54:59.320Z] 8855.67 IOPS, 34.59 MiB/s [2024-11-29T16:54:59.320Z] 9167.75 IOPS, 35.81 MiB/s [2024-11-29T16:54:59.320Z] [2024-11-29 16:54:47.942733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.942807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.942871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.942888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.942903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.942916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.942930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.942942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.942956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.942969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.942983] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.942995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.943022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.943048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.943075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.943101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.943128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.943154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.943181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.943215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.943243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.528 [2024-11-29 16:54:47.943269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.528 [2024-11-29 16:54:47.943296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.528 [2024-11-29 16:54:47.943323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.528 [2024-11-29 16:54:47.943364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.528 [2024-11-29 16:54:47.943393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.528 [2024-11-29 16:54:47.943423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.528 [2024-11-29 16:54:47.943450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.528 [2024-11-29 16:54:47.943464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.529 [2024-11-29 16:54:47.943477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.529 [2024-11-29 16:54:47.943503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943544] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:102456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:102520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:102528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:102552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.943979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.943993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.944021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.944077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.944104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.529 [2024-11-29 16:54:47.944131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.529 [2024-11-29 16:54:47.944158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:35.529 [2024-11-29 16:54:47.944185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.529 [2024-11-29 16:54:47.944211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.529 [2024-11-29 16:54:47.944238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.529 [2024-11-29 16:54:47.944265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.529 [2024-11-29 16:54:47.944291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.529 [2024-11-29 16:54:47.944318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.529 [2024-11-29 16:54:47.944351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.529 [2024-11-29 16:54:47.944392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.529 [2024-11-29 16:54:47.944419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.529 [2024-11-29 16:54:47.944446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.529 [2024-11-29 16:54:47.944460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 
[2024-11-29 16:54:47.944473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.944499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.944526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.944552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.944579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.944605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.944632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.944658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.944684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.944718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.944745] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.944772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.944799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.944826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.944852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.944879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.944906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.944932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.944958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.944985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.944999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.945011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.945043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.945070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.945097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.945124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.945150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.945177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.945203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.945230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.945257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.530 [2024-11-29 16:54:47.945284] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.945311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.945366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.945393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.945428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.945456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.945484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.945511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.945538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.530 [2024-11-29 16:54:47.945566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.530 [2024-11-29 16:54:47.945581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.531 [2024-11-29 16:54:47.945593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.945608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.531 [2024-11-29 16:54:47.945621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.945635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.531 [2024-11-29 16:54:47.945648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.945663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.531 [2024-11-29 16:54:47.945676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.945691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.531 [2024-11-29 16:54:47.945704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.945732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.531 [2024-11-29 16:54:47.945745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.945759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.531 [2024-11-29 16:54:47.945777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.945795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.531 [2024-11-29 16:54:47.945808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.945823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.531 [2024-11-29 16:54:47.945835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.945853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.531 [2024-11-29 16:54:47.945867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.945881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.531 [2024-11-29 16:54:47.945894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:35.531 [2024-11-29 16:54:47.945908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.531 [2024-11-29 16:54:47.945920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.945934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.531 [2024-11-29 16:54:47.945947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.945961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.531 [2024-11-29 16:54:47.945973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.945988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.531 [2024-11-29 16:54:47.946000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.531 [2024-11-29 16:54:47.946027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2120c90 is same with the state(6) to be set 00:19:35.531 [2024-11-29 16:54:47.946055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.531 [2024-11-29 16:54:47.946065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.531 [2024-11-29 16:54:47.946075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102304 len:8 PRP1 0x0 PRP2 0x0 00:19:35.531 [2024-11-29 16:54:47.946087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.531 [2024-11-29 16:54:47.946110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.531 [2024-11-29 16:54:47.946119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102824 len:8 PRP1 0x0 PRP2 0x0 00:19:35.531 [2024-11-29 16:54:47.946138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.531 [2024-11-29 16:54:47.946161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.531 [2024-11-29 16:54:47.946171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102832 len:8 PRP1 0x0 PRP2 0x0 00:19:35.531 [2024-11-29 16:54:47.946183] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.531 [2024-11-29 16:54:47.946204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.531 [2024-11-29 16:54:47.946213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102840 len:8 PRP1 0x0 PRP2 0x0 00:19:35.531 [2024-11-29 16:54:47.946225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.531 [2024-11-29 16:54:47.946248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.531 [2024-11-29 16:54:47.946258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102848 len:8 PRP1 0x0 PRP2 0x0 00:19:35.531 [2024-11-29 16:54:47.946270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.531 [2024-11-29 16:54:47.946292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.531 [2024-11-29 16:54:47.946301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102856 len:8 PRP1 0x0 PRP2 0x0 00:19:35.531 [2024-11-29 16:54:47.946313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.531 [2024-11-29 16:54:47.946335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.531 [2024-11-29 16:54:47.946344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102864 len:8 PRP1 0x0 PRP2 0x0 00:19:35.531 [2024-11-29 16:54:47.946366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.531 [2024-11-29 16:54:47.946391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.531 [2024-11-29 16:54:47.946400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102872 len:8 PRP1 0x0 PRP2 0x0 00:19:35.531 [2024-11-29 16:54:47.946412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.531 [2024-11-29 16:54:47.946433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.531 [2024-11-29 16:54:47.946443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102880 len:8 PRP1 0x0 PRP2 0x0 00:19:35.531 [2024-11-29 16:54:47.946455] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.531 [2024-11-29 16:54:47.946476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.531 [2024-11-29 16:54:47.946491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102888 len:8 PRP1 0x0 PRP2 0x0 00:19:35.531 [2024-11-29 16:54:47.946505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.531 [2024-11-29 16:54:47.946527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.531 [2024-11-29 16:54:47.946536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102896 len:8 PRP1 0x0 PRP2 0x0 00:19:35.531 [2024-11-29 16:54:47.946548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.531 [2024-11-29 16:54:47.946570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.531 [2024-11-29 16:54:47.946579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102904 len:8 PRP1 0x0 PRP2 0x0 00:19:35.531 [2024-11-29 16:54:47.946591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.531 [2024-11-29 16:54:47.946612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.531 [2024-11-29 16:54:47.946622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102912 len:8 PRP1 0x0 PRP2 0x0 00:19:35.531 [2024-11-29 16:54:47.946634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.531 [2024-11-29 16:54:47.946646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.531 [2024-11-29 16:54:47.946655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.531 [2024-11-29 16:54:47.946664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102920 len:8 PRP1 0x0 PRP2 0x0 00:19:35.531 [2024-11-29 16:54:47.946676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:47.946689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.532 [2024-11-29 16:54:47.946698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.532 [2024-11-29 16:54:47.946707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102928 len:8 PRP1 0x0 PRP2 0x0 00:19:35.532 [2024-11-29 16:54:47.946719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:47.946731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.532 [2024-11-29 16:54:47.946740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.532 [2024-11-29 16:54:47.946749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102936 len:8 PRP1 0x0 PRP2 0x0 00:19:35.532 [2024-11-29 16:54:47.946761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:47.946773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.532 [2024-11-29 16:54:47.946783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.532 [2024-11-29 16:54:47.946792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102944 len:8 PRP1 0x0 PRP2 0x0 00:19:35.532 [2024-11-29 16:54:47.946804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:47.946854] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:19:35.532 [2024-11-29 16:54:47.946906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-11-29 16:54:47.946926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:47.946944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-11-29 16:54:47.946957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:47.946970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-11-29 16:54:47.946982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:47.946995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-11-29 16:54:47.947007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:47.947019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:35.532 [2024-11-29 16:54:47.947051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fd160 (9): Bad file descriptor 00:19:35.532 [2024-11-29 16:54:47.950703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:35.532 [2024-11-29 16:54:47.973038] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
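(Editor's note, not part of the captured log.) The repeated "ABORTED - SQ DELETION (00/08)" completions above are emitted by spdk_nvme_print_completion as a (status code type / status code) pair: SCT 0x0 is the NVMe generic command status set, and SC 0x08 there is "Command Aborted due to SQ Deletion". That is the expected outcome here, since the failover from 10.0.0.3:4421 to 10.0.0.3:4422 tears down the I/O submission queue and the queued reads/writes are completed manually with that status before the controller is reset. A minimal, self-contained C sketch of decoding that pair is shown below; the struct and function names are hypothetical stand-ins for illustration, not the test's actual code.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical stand-in for the status fields carried in an NVMe completion:
     * sct = status code type, sc = status code (the "(00/08)" pair in the log). */
    struct nvme_status { uint8_t sct; uint8_t sc; };

    static const char *decode_status(struct nvme_status st)
    {
            /* SCT 0x0 is the generic command status set; SC 0x08 in that set is
             * "Command Aborted due to SQ Deletion" per the NVMe base specification. */
            if (st.sct == 0x0 && st.sc == 0x08) {
                    return "ABORTED - SQ DELETION";
            }
            return "other status";
    }

    int main(void)
    {
            struct nvme_status st = { .sct = 0x0, .sc = 0x08 };
            printf("(%02x/%02x) -> %s\n", st.sct, st.sc, decode_status(st));
            return 0;
    }

Compiled and run, this prints "(00/08) -> ABORTED - SQ DELETION", matching the code pair shown throughout the aborted-I/O records above.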
00:19:35.532 9279.40 IOPS, 36.25 MiB/s [2024-11-29T16:54:59.324Z] 9447.50 IOPS, 36.90 MiB/s [2024-11-29T16:54:59.324Z] 9547.00 IOPS, 37.29 MiB/s [2024-11-29T16:54:59.324Z] 9571.62 IOPS, 37.39 MiB/s [2024-11-29T16:54:59.324Z] 9629.89 IOPS, 37.62 MiB/s [2024-11-29T16:54:59.324Z] [2024-11-29 16:54:52.586613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-11-29 16:54:52.586670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.586705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-11-29 16:54:52.586719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.586731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-11-29 16:54:52.586745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.586758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.532 [2024-11-29 16:54:52.586770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.586783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fd160 is same with the state(6) to be set 00:19:35.532 [2024-11-29 16:54:52.587913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.587948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.587971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.588007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.588039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.588083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.588125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.588152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.588180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.588207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.532 [2024-11-29 16:54:52.588236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.532 [2024-11-29 16:54:52.588263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.532 [2024-11-29 16:54:52.588290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.532 [2024-11-29 16:54:52.588317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.532 [2024-11-29 16:54:52.588360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.532 [2024-11-29 16:54:52.588388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.532 [2024-11-29 16:54:52.588441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 
16:54:52.588457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.532 [2024-11-29 16:54:52.588471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.588500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.588528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.588556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.588584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.588612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.588642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.532 [2024-11-29 16:54:52.588670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.532 [2024-11-29 16:54:52.588685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.588699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.588713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.588727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.588756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.588769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.588783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.588796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.588818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.588832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.588847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.588859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.588874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.588887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.588902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.588914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.588928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.588941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.588956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.588968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.588983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.588996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.589194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.589222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.589250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.589277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.589304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89600 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.589332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.589388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.533 [2024-11-29 16:54:52.589416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 
16:54:52.589651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.533 [2024-11-29 16:54:52.589722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.533 [2024-11-29 16:54:52.589735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.589764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.589777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.589792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.589805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.589819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.589833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.589847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:89160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.589860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.589875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.589888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.589903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.589918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.589935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.589949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.589963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.589976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.589990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.534 [2024-11-29 16:54:52.590574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 
16:54:52.590816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.534 [2024-11-29 16:54:52.590877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.534 [2024-11-29 16:54:52.590890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.590905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.535 [2024-11-29 16:54:52.590918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.590932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.535 [2024-11-29 16:54:52.590948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.590963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.535 [2024-11-29 16:54:52.590976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.590990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.535 [2024-11-29 16:54:52.591009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.535 [2024-11-29 16:54:52.591037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:103 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.535 [2024-11-29 16:54:52.591500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.535 [2024-11-29 16:54:52.591527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.535 [2024-11-29 16:54:52.591554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.535 [2024-11-29 16:54:52.591582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.535 [2024-11-29 16:54:52.591609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.535 [2024-11-29 16:54:52.591636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.535 [2024-11-29 16:54:52.591664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89416 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.535 [2024-11-29 16:54:52.591691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.535 [2024-11-29 16:54:52.591791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.535 [2024-11-29 16:54:52.591811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89424 len:8 PRP1 0x0 PRP2 0x0 00:19:35.535 [2024-11-29 16:54:52.591826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.535 [2024-11-29 16:54:52.591874] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:19:35.535 [2024-11-29 16:54:52.591892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:35.535 [2024-11-29 16:54:52.595478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:35.535 [2024-11-29 16:54:52.595517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fd160 (9): Bad file descriptor 00:19:35.535 [2024-11-29 16:54:52.619393] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:19:35.535 9641.20 IOPS, 37.66 MiB/s [2024-11-29T16:54:59.327Z] 9693.45 IOPS, 37.87 MiB/s [2024-11-29T16:54:59.327Z] 9723.00 IOPS, 37.98 MiB/s [2024-11-29T16:54:59.327Z] 9752.92 IOPS, 38.10 MiB/s [2024-11-29T16:54:59.327Z] 9782.00 IOPS, 38.21 MiB/s [2024-11-29T16:54:59.327Z] 9805.07 IOPS, 38.30 MiB/s 00:19:35.535 Latency(us) 00:19:35.535 [2024-11-29T16:54:59.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.535 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:35.535 Verification LBA range: start 0x0 length 0x4000 00:19:35.535 NVMe0n1 : 15.01 9805.78 38.30 210.36 0.00 12749.57 621.85 14120.03 00:19:35.535 [2024-11-29T16:54:59.327Z] =================================================================================================================== 00:19:35.535 [2024-11-29T16:54:59.327Z] Total : 9805.78 38.30 210.36 0.00 12749.57 621.85 14120.03 00:19:35.535 Received shutdown signal, test time was about 15.000000 seconds 00:19:35.535 00:19:35.535 Latency(us) 00:19:35.535 [2024-11-29T16:54:59.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.535 [2024-11-29T16:54:59.327Z] =================================================================================================================== 00:19:35.535 [2024-11-29T16:54:59.327Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.535 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:35.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:35.535 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:35.535 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:35.535 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=92098 00:19:35.535 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 92098 /var/tmp/bdevperf.sock 00:19:35.536 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 92098 ']' 00:19:35.536 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.536 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:35.536 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.536 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.536 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.536 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:35.536 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.536 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:35.536 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:35.536 [2024-11-29 16:54:58.907618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:35.536 16:54:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:35.536 [2024-11-29 16:54:59.143979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:35.536 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:35.795 NVMe0n1 00:19:35.795 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:36.055 00:19:36.055 16:54:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:36.623 00:19:36.623 16:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:36.623 16:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:36.883 16:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:37.143 16:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:40.431 16:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:40.431 16:55:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:40.431 16:55:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=92173 00:19:40.431 16:55:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:40.431 16:55:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 92173 00:19:41.368 { 00:19:41.368 "results": [ 00:19:41.368 { 00:19:41.368 "job": "NVMe0n1", 00:19:41.368 "core_mask": "0x1", 00:19:41.368 "workload": "verify", 00:19:41.368 "status": "finished", 00:19:41.368 "verify_range": { 00:19:41.368 "start": 0, 00:19:41.368 "length": 16384 00:19:41.368 }, 00:19:41.368 "queue_depth": 128, 00:19:41.368 "io_size": 4096, 00:19:41.368 "runtime": 1.005236, 00:19:41.368 "iops": 7682.773000569021, 00:19:41.368 "mibps": 30.010832033472738, 00:19:41.368 "io_failed": 0, 00:19:41.368 "io_timeout": 0, 00:19:41.368 "avg_latency_us": 16587.625485150613, 00:19:41.368 "min_latency_us": 953.2509090909091, 00:19:41.368 "max_latency_us": 14120.02909090909 00:19:41.368 } 00:19:41.368 ], 00:19:41.368 "core_count": 1 00:19:41.368 } 00:19:41.626 16:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:41.626 [2024-11-29 16:54:58.396375] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:41.626 [2024-11-29 16:54:58.396491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92098 ] 00:19:41.626 [2024-11-29 16:54:58.522357] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:41.626 [2024-11-29 16:54:58.548704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.626 [2024-11-29 16:54:58.567440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.626 [2024-11-29 16:54:58.595963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:41.626 [2024-11-29 16:55:00.694080] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:41.626 [2024-11-29 16:55:00.694190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.626 [2024-11-29 16:55:00.694214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.626 [2024-11-29 16:55:00.694232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.626 [2024-11-29 16:55:00.694245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.626 [2024-11-29 16:55:00.694258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.626 [2024-11-29 16:55:00.694270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.626 [2024-11-29 16:55:00.694283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:41.626 [2024-11-29 16:55:00.694296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.626 [2024-11-29 16:55:00.694309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:19:41.626 [2024-11-29 16:55:00.694388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ac160 (9): Bad file descriptor 00:19:41.626 [2024-11-29 16:55:00.694420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:19:41.626 [2024-11-29 16:55:00.704509] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:19:41.626 Running I/O for 1 seconds... 
00:19:41.626 7580.00 IOPS, 29.61 MiB/s 00:19:41.626 Latency(us) 00:19:41.626 [2024-11-29T16:55:05.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.626 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:41.626 Verification LBA range: start 0x0 length 0x4000 00:19:41.626 NVMe0n1 : 1.01 7682.77 30.01 0.00 0.00 16587.63 953.25 14120.03 00:19:41.626 [2024-11-29T16:55:05.418Z] =================================================================================================================== 00:19:41.626 [2024-11-29T16:55:05.418Z] Total : 7682.77 30.01 0.00 0.00 16587.63 953.25 14120.03 00:19:41.626 16:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:41.626 16:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:41.884 16:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:41.884 16:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:42.142 16:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:42.401 16:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:42.661 16:55:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 92098 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 92098 ']' 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 92098 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92098 00:19:45.950 killing process with pid 92098 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92098' 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 92098 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 92098 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:45.950 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:46.210 rmmod nvme_tcp 00:19:46.210 rmmod nvme_fabrics 00:19:46.210 rmmod nvme_keyring 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 91863 ']' 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 91863 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 91863 ']' 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 91863 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.210 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91863 00:19:46.469 killing process with pid 91863 00:19:46.469 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:46.469 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:46.470 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91863' 00:19:46.470 16:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 91863 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 91863 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:46.470 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:19:46.729 00:19:46.729 real 0m30.918s 00:19:46.729 user 1m59.293s 00:19:46.729 sys 0m5.332s 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:46.729 ************************************ 00:19:46.729 END TEST nvmf_failover 00:19:46.729 ************************************ 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.729 ************************************ 00:19:46.729 START TEST nvmf_host_discovery 00:19:46.729 ************************************ 00:19:46.729 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:46.729 * Looking for test storage... 
00:19:46.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:46.989 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:46.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.989 --rc genhtml_branch_coverage=1 00:19:46.989 --rc genhtml_function_coverage=1 00:19:46.990 --rc genhtml_legend=1 00:19:46.990 --rc geninfo_all_blocks=1 00:19:46.990 --rc geninfo_unexecuted_blocks=1 00:19:46.990 00:19:46.990 ' 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:46.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.990 --rc genhtml_branch_coverage=1 00:19:46.990 --rc genhtml_function_coverage=1 00:19:46.990 --rc genhtml_legend=1 00:19:46.990 --rc geninfo_all_blocks=1 00:19:46.990 --rc geninfo_unexecuted_blocks=1 00:19:46.990 00:19:46.990 ' 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:46.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.990 --rc genhtml_branch_coverage=1 00:19:46.990 --rc genhtml_function_coverage=1 00:19:46.990 --rc genhtml_legend=1 00:19:46.990 --rc geninfo_all_blocks=1 00:19:46.990 --rc geninfo_unexecuted_blocks=1 00:19:46.990 00:19:46.990 ' 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:46.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.990 --rc genhtml_branch_coverage=1 00:19:46.990 --rc genhtml_function_coverage=1 00:19:46.990 --rc genhtml_legend=1 00:19:46.990 --rc geninfo_all_blocks=1 00:19:46.990 --rc geninfo_unexecuted_blocks=1 00:19:46.990 00:19:46.990 ' 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:46.990 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:46.990 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:46.991 Cannot find device "nvmf_init_br" 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:46.991 Cannot find device "nvmf_init_br2" 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:46.991 Cannot find device "nvmf_tgt_br" 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:46.991 Cannot find device "nvmf_tgt_br2" 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:46.991 Cannot find device "nvmf_init_br" 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:46.991 Cannot find device "nvmf_init_br2" 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:46.991 Cannot find device "nvmf_tgt_br" 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:46.991 Cannot find device "nvmf_tgt_br2" 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:46.991 Cannot find device "nvmf_br" 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:46.991 Cannot find device "nvmf_init_if" 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:46.991 Cannot find device "nvmf_init_if2" 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:46.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:46.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:46.991 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:47.251 16:55:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:47.251 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:47.251 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:47.251 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:47.251 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:47.251 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.314 ms 00:19:47.251 00:19:47.251 --- 10.0.0.3 ping statistics --- 00:19:47.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.251 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:19:47.251 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:47.251 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:47.251 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:19:47.251 00:19:47.251 --- 10.0.0.4 ping statistics --- 00:19:47.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.251 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:47.251 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:47.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:47.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:47.251 00:19:47.251 --- 10.0.0.1 ping statistics --- 00:19:47.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.251 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:47.251 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:47.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:47.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:19:47.251 00:19:47.251 --- 10.0.0.2 ping statistics --- 00:19:47.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.251 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:47.251 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.251 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:19:47.251 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:47.251 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.251 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:47.251 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:47.251 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.251 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:47.252 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:47.511 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:47.511 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:47.511 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.511 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.511 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=92494 00:19:47.511 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:47.511 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 92494 00:19:47.511 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 92494 ']' 00:19:47.511 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.511 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.511 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.511 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.511 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.511 [2024-11-29 16:55:11.109187] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:47.511 [2024-11-29 16:55:11.109279] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.511 [2024-11-29 16:55:11.228243] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:19:47.511 [2024-11-29 16:55:11.252903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.511 [2024-11-29 16:55:11.271161] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.512 [2024-11-29 16:55:11.271229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.512 [2024-11-29 16:55:11.271255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.512 [2024-11-29 16:55:11.271262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.512 [2024-11-29 16:55:11.271268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.512 [2024-11-29 16:55:11.271591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.512 [2024-11-29 16:55:11.298827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:47.771 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.771 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:19:47.771 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:47.771 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.771 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.771 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.771 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.772 [2024-11-29 16:55:11.394933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.772 [2024-11-29 16:55:11.403041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.772 null0 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.772 16:55:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.772 null1 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=92513 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 92513 /tmp/host.sock 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 92513 ']' 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.772 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.772 16:55:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.772 [2024-11-29 16:55:11.496293] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:47.772 [2024-11-29 16:55:11.496405] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92513 ] 00:19:48.030 [2024-11-29 16:55:11.622215] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
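Up to this point the trace has finished the fixture bring-up: per-interface veth pairs, a dedicated nvmf_tgt_ns_spdk namespace, an nvmf_br bridge, iptables ACCEPT rules for port 4420, and ping checks across the bridge; then the target application was launched inside the namespace, the TCP transport was created, the discovery subsystem was exposed on 10.0.0.3:8009, and the null0/null1 bdevs were created. The condensed sketch below is illustrative only: it covers a single init/tgt interface pair, calls rpc.py directly instead of the suite's ipts/nvmfappstart/rpc_cmd helpers, and uses sleep as a stand-in for waitforlisten.

#!/usr/bin/env bash
# Sketch (assumed form): target-side bring-up corresponding to the trace above.
SPDK=/home/vagrant/spdk_repo/spdk

# Network plumbing: one veth pair for the initiator side, one for the target
# side inside its own namespace, both enslaved to a single bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3        # initiator side -> target namespace sanity check

# Target app in the namespace, TCP transport, discovery listener, null bdevs.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
sleep 2                   # stand-in for waitforlisten on /var/tmp/spdk.sock
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
"$SPDK/scripts/rpc.py" bdev_null_create null0 1000 512
"$SPDK/scripts/rpc.py" bdev_null_create null1 1000 512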
00:19:48.030 [2024-11-29 16:55:11.649283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.030 [2024-11-29 16:55:11.667852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.030 [2024-11-29 16:55:11.694188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:48.598 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.598 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:19:48.598 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:48.598 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:48.598 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.598 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.598 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.598 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:48.598 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.598 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.598 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.598 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:48.598 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.857 16:55:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:48.857 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq 
-r '.[].name' 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:48.858 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.117 [2024-11-29 16:55:12.743371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.117 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:19:49.376 16:55:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:19:49.634 [2024-11-29 16:55:13.387672] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:49.634 [2024-11-29 16:55:13.387870] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:49.634 [2024-11-29 16:55:13.387902] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:49.634 
[2024-11-29 16:55:13.393703] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:49.893 [2024-11-29 16:55:13.448011] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:49.893 [2024-11-29 16:55:13.449029] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x18ae7b0:1 started. 00:19:49.893 [2024-11-29 16:55:13.450359] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:49.893 [2024-11-29 16:55:13.450395] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:49.893 [2024-11-29 16:55:13.456271] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x18ae7b0 was disconnected and freed. delete nvme_qpair. 00:19:50.460 16:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:50.460 16:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:50.460 16:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:50.460 16:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:50.460 16:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.460 16:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:50.460 16:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:50.460 16:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.460 16:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:50.460 16:55:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.460 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.460 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:50.460 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:50.460 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:50.460 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:50.460 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:50.460 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:50.460 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:50.460 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:50.461 16:55:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:50.461 16:55:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.461 [2024-11-29 16:55:14.199278] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x18bc170:1 started. 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.461 [2024-11-29 16:55:14.206669] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x18bc170 was disconnected and freed. delete nvme_qpair. 
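The checks around this point exercise the host-side discovery service: a second SPDK app was started on /tmp/host.sock as the host, bdev_nvme_start_discovery was pointed at the discovery listener on 10.0.0.3:8009, and each change made on the target (new subsystem, namespace, listener, allowed host) is expected to surface on the host as a controller and bdev plus an advancing notification count. The sketch below is a hedged restatement of that flow: it calls rpc.py directly instead of the rpc_cmd/waitforcondition/get_* helpers, uses sleep in place of waitforlisten, and the "expect" comments only repeat what the trace above observed.

#!/usr/bin/env bash
# Sketch (assumed form): host-side discovery flow corresponding to the trace above.
SPDK=/home/vagrant/spdk_repo/spdk
TGT_RPC() { "$SPDK/scripts/rpc.py" "$@"; }                     # target app, default /var/tmp/spdk.sock
HOST_RPC() { "$SPDK/scripts/rpc.py" -s /tmp/host.sock "$@"; }  # host app on /tmp/host.sock

# Host app attaches to the discovery subsystem exposed by the target.
"$SPDK/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &
sleep 2                                                        # stand-in for waitforlisten
HOST_RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

# Target-side changes the discovery service has to pick up.
TGT_RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
TGT_RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
TGT_RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
TGT_RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
TGT_RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1   # second namespace -> nvme0n2

# Host-side observations matching the waitforcondition checks in the trace.
HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name'            # expect: nvme0
HOST_RPC bdev_get_bdevs | jq -r '.[].name' | sort                # expect: nvme0n1 nvme0n2
HOST_RPC notify_get_notifications -i 0 | jq '. | length'         # expect: 2 (one per namespace added)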
00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:50.461 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.721 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.722 [2024-11-29 16:55:14.312573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:50.722 [2024-11-29 16:55:14.313445] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:50.722 [2024-11-29 16:55:14.313637] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:50.722 [2024-11-29 16:55:14.319480] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:50.722 [2024-11-29 16:55:14.381864] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:19:50.722 [2024-11-29 16:55:14.381907] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:50.722 [2024-11-29 16:55:14.381918] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:50.722 [2024-11-29 16:55:14.381923] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths 
nvme0 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.722 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.982 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:50.982 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:50.982 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:50.982 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.983 [2024-11-29 16:55:14.541864] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:50.983 [2024-11-29 16:55:14.541889] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:50.983 [2024-11-29 16:55:14.547893] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:19:50.983 [2024-11-29 16:55:14.547918] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:50.983 [2024-11-29 16:55:14.548023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.983 [2024-11-29 16:55:14.548069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.983 [2024-11-29 16:55:14.548097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.983 [2024-11-29 16:55:14.548106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.983 [2024-11-29 16:55:14.548116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.983 [2024-11-29 16:55:14.548124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.983 [2024-11-29 16:55:14.548134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:50.983 [2024-11-29 16:55:14.548142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.983 [2024-11-29 16:55:14.548151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ebc0 is same with the state(6) to be set 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.983 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.242 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.242 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 
-- # (( max-- )) 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.243 16:55:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.176 [2024-11-29 16:55:15.958553] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:52.176 [2024-11-29 16:55:15.958576] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:52.176 [2024-11-29 16:55:15.958592] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:52.176 [2024-11-29 16:55:15.964604] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:19:52.434 [2024-11-29 16:55:16.022935] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:19:52.434 [2024-11-29 16:55:16.023522] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x18bb8b0:1 started. 00:19:52.434 [2024-11-29 16:55:16.025393] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:52.434 [2024-11-29 16:55:16.025438] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:52.434 [2024-11-29 16:55:16.027223] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x18bb8b0 was disconnected and freed. delete nvme_qpair. 
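The notification checks traced above all reduce to one pattern: poll the host RPC socket until the number of notifications newer than the last consumed notify_id equals the expected count. A minimal sketch of that pattern in plain shell (rpc.py stands in for the test's rpc_cmd wrapper; the retry bound and sleep are illustrative, not taken from this run):

  HOST_SOCK=/tmp/host.sock   # host application's RPC socket in this run
  notify_id=2                # last notification id already accounted for
  expected_count=0
  for _ in $(seq 1 10); do
      notification_count=$(rpc.py -s "$HOST_SOCK" notify_get_notifications -i "$notify_id" | jq '. | length')
      (( notification_count == expected_count )) && break
      sleep 1
  done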
00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.434 request: 00:19:52.434 { 00:19:52.434 "name": "nvme", 00:19:52.434 "trtype": "tcp", 00:19:52.434 "traddr": "10.0.0.3", 00:19:52.434 "adrfam": "ipv4", 00:19:52.434 "trsvcid": "8009", 00:19:52.434 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:52.434 "wait_for_attach": true, 00:19:52.434 "method": "bdev_nvme_start_discovery", 00:19:52.434 "req_id": 1 00:19:52.434 } 00:19:52.434 Got JSON-RPC error response 00:19:52.434 response: 00:19:52.434 { 00:19:52.434 "code": -17, 00:19:52.434 "message": "File exists" 00:19:52.434 } 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:52.434 16:55:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.434 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.435 request: 00:19:52.435 { 00:19:52.435 "name": "nvme_second", 00:19:52.435 "trtype": "tcp", 00:19:52.435 "traddr": "10.0.0.3", 00:19:52.435 "adrfam": "ipv4", 00:19:52.435 "trsvcid": "8009", 00:19:52.435 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:52.435 "wait_for_attach": true, 00:19:52.435 "method": "bdev_nvme_start_discovery", 00:19:52.435 "req_id": 1 00:19:52.435 } 00:19:52.435 Got JSON-RPC error response 00:19:52.435 response: 00:19:52.435 { 00:19:52.435 "code": -17, 00:19:52.435 "message": "File exists" 00:19:52.435 } 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:52.435 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.694 16:55:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:53.630 [2024-11-29 16:55:17.281903] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.630 [2024-11-29 16:55:17.281964] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1893df0 with addr=10.0.0.3, port=8010 00:19:53.630 [2024-11-29 16:55:17.281981] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:53.630 [2024-11-29 16:55:17.281990] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:53.630 [2024-11-29 16:55:17.281998] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:54.566 [2024-11-29 16:55:18.281880] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:54.566 [2024-11-29 16:55:18.281933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bdbc0 with addr=10.0.0.3, port=8010 00:19:54.566 [2024-11-29 16:55:18.281948] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:54.566 [2024-11-29 16:55:18.281957] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:54.566 [2024-11-29 16:55:18.281964] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:55.502 [2024-11-29 16:55:19.281816] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:19:55.502 request: 00:19:55.502 { 00:19:55.502 "name": "nvme_second", 00:19:55.502 "trtype": "tcp", 00:19:55.502 "traddr": "10.0.0.3", 00:19:55.502 "adrfam": "ipv4", 00:19:55.502 "trsvcid": "8010", 00:19:55.502 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:55.502 "wait_for_attach": false, 00:19:55.502 "attach_timeout_ms": 3000, 00:19:55.502 "method": "bdev_nvme_start_discovery", 00:19:55.502 "req_id": 1 00:19:55.502 } 00:19:55.502 Got JSON-RPC error response 00:19:55.502 response: 00:19:55.502 { 00:19:55.502 "code": -110, 00:19:55.502 "message": "Connection timed out" 00:19:55.502 } 00:19:55.502 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:55.502 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:55.502 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:55.502 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:55.502 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:55.502 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:55.761 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:55.761 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:55.761 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - 
SIGINT SIGTERM EXIT 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 92513 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:55.762 rmmod nvme_tcp 00:19:55.762 rmmod nvme_fabrics 00:19:55.762 rmmod nvme_keyring 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 92494 ']' 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 92494 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 92494 ']' 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 92494 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92494 00:19:55.762 killing process with pid 92494 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92494' 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 92494 00:19:55.762 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 92494 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
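The two failed bdev_nvme_start_discovery calls earlier in this test show the RPC's negative paths: reusing a controller name that already owns a discovery service is rejected immediately with JSON-RPC error -17 ("File exists"), while pointing a fresh name at a port with no listener and capping the attach time with -T fails after the timeout with -110 ("Connection timed out"). A hedged recap using the same arguments the test passed (rpc.py standing in for rpc_cmd):

  # "nvme" already runs a discovery service on 10.0.0.3:8009 -> error -17, File exists
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -w

  # nothing listens on 8010 -> fails after the 3000 ms attach timeout, error -110
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000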
00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:56.022 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:56.282 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:56.282 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.282 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.282 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.282 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:19:56.282 00:19:56.282 real 0m9.424s 00:19:56.282 user 0m18.155s 00:19:56.282 sys 0m1.825s 00:19:56.282 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.282 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:56.282 ************************************ 00:19:56.282 END TEST nvmf_host_discovery 00:19:56.282 ************************************ 00:19:56.282 16:55:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:56.282 16:55:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:56.282 16:55:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.282 16:55:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.282 ************************************ 00:19:56.282 START TEST nvmf_host_multipath_status 00:19:56.282 ************************************ 00:19:56.282 16:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 
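The nvmf_veth_fini sequence logged just before the END TEST banner tears the virtual topology down in reverse build order: detach the four bridge ports, bring them down, delete the bridge, delete the initiator-side veth ends, delete the target-side ends from inside the namespace, then drop the namespace via remove_spdk_ns. Condensed from the commands above (the loop is only a compaction of the per-interface calls):

  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" nomaster
      ip link set "$port" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  # remove_spdk_ns then deletes the nvmf_tgt_ns_spdk namespace itself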
00:19:56.282 * Looking for test storage... 00:19:56.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:56.282 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:56.282 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:19:56.282 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:56.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.541 --rc genhtml_branch_coverage=1 00:19:56.541 --rc genhtml_function_coverage=1 00:19:56.541 --rc genhtml_legend=1 00:19:56.541 --rc geninfo_all_blocks=1 00:19:56.541 --rc geninfo_unexecuted_blocks=1 00:19:56.541 00:19:56.541 ' 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:56.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.541 --rc genhtml_branch_coverage=1 00:19:56.541 --rc genhtml_function_coverage=1 00:19:56.541 --rc genhtml_legend=1 00:19:56.541 --rc geninfo_all_blocks=1 00:19:56.541 --rc geninfo_unexecuted_blocks=1 00:19:56.541 00:19:56.541 ' 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:56.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.541 --rc genhtml_branch_coverage=1 00:19:56.541 --rc genhtml_function_coverage=1 00:19:56.541 --rc genhtml_legend=1 00:19:56.541 --rc geninfo_all_blocks=1 00:19:56.541 --rc geninfo_unexecuted_blocks=1 00:19:56.541 00:19:56.541 ' 00:19:56.541 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:56.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.541 --rc genhtml_branch_coverage=1 00:19:56.541 --rc genhtml_function_coverage=1 00:19:56.542 --rc genhtml_legend=1 00:19:56.542 --rc geninfo_all_blocks=1 00:19:56.542 --rc geninfo_unexecuted_blocks=1 00:19:56.542 00:19:56.542 ' 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:56.542 16:55:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:56.542 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:56.542 Cannot find device "nvmf_init_br" 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:56.542 Cannot find device "nvmf_init_br2" 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:56.542 Cannot find device "nvmf_tgt_br" 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:56.542 Cannot find device "nvmf_tgt_br2" 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:56.542 Cannot find device "nvmf_init_br" 00:19:56.542 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:56.543 Cannot find device "nvmf_init_br2" 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:56.543 Cannot find device "nvmf_tgt_br" 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:56.543 Cannot find device "nvmf_tgt_br2" 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:56.543 Cannot find device "nvmf_br" 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:19:56.543 Cannot find device "nvmf_init_if" 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:56.543 Cannot find device "nvmf_init_if2" 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:56.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:56.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:56.543 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:56.802 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:56.803 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:56.803 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:19:56.803 00:19:56.803 --- 10.0.0.3 ping statistics --- 00:19:56.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.803 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:56.803 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:56.803 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:19:56.803 00:19:56.803 --- 10.0.0.4 ping statistics --- 00:19:56.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.803 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:56.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:56.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:19:56.803 00:19:56.803 --- 10.0.0.1 ping statistics --- 00:19:56.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.803 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:56.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:19:56.803 00:19:56.803 --- 10.0.0.2 ping statistics --- 00:19:56.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.803 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=93024 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 93024 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 93024 ']' 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
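For reference, the veth/namespace topology that nvmf_veth_init builds above (and that the pings just verified) can be reproduced by hand with roughly the following commands; device names and addresses are taken from this run, and the second interface pair (nvmf_init_if2/nvmf_tgt_if2) plus cleanup are omitted:

    # minimal sketch of the test topology: a host-side initiator veth, bridged to a
    # target veth that lives inside the nvmf_tgt_ns_spdk network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                             # host -> target namespace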
00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.803 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:56.803 [2024-11-29 16:55:20.585303] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:19:56.803 [2024-11-29 16:55:20.585402] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.063 [2024-11-29 16:55:20.712619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:57.063 [2024-11-29 16:55:20.736794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:57.063 [2024-11-29 16:55:20.755187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.063 [2024-11-29 16:55:20.755251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.063 [2024-11-29 16:55:20.755261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.063 [2024-11-29 16:55:20.755267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.063 [2024-11-29 16:55:20.755274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:57.063 [2024-11-29 16:55:20.756061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.063 [2024-11-29 16:55:20.756074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.063 [2024-11-29 16:55:20.784041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:57.063 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.063 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:19:57.063 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:57.063 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:57.063 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:57.322 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.322 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=93024 00:19:57.322 16:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:57.581 [2024-11-29 16:55:21.148152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.581 16:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:57.839 Malloc0 00:19:57.840 16:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:19:58.098 16:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:58.357 16:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:58.616 [2024-11-29 16:55:22.198497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:58.616 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:58.875 [2024-11-29 16:55:22.430585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:58.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.875 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=93067 00:19:58.875 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:58.875 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.875 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 93067 /var/tmp/bdevperf.sock 00:19:58.875 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 93067 ']' 00:19:58.875 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.875 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.875 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
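Condensed, the RPC sequence just replayed stands up an ANA-reporting target with one malloc-backed namespace and two TCP listeners, plus a bdevperf instance waiting on its own RPC socket; a hand-runnable sketch with the rpc.py and bdevperf paths shortened (arguments exactly as logged):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r enables ANA reporting
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

The two bdev_nvme_attach_controller calls that follow (ports 4420 and 4421, both with -x multipath) then give bdevperf two paths to the same Nvme0n1 bdev.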
00:19:58.875 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.875 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:59.135 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.135 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:19:59.135 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:59.393 16:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:59.652 Nvme0n1 00:19:59.652 16:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:59.910 Nvme0n1 00:19:59.910 16:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:59.910 16:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:02.441 16:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:20:02.441 16:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:02.441 16:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:02.441 16:55:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:20:03.376 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:20:03.376 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:03.376 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.376 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:03.635 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:03.635 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:03.635 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.635 16:55:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:03.894 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:03.894 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:03.894 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.894 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:04.152 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.153 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:04.153 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.153 16:55:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:04.411 16:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.411 16:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:04.411 16:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.411 16:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:04.669 16:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.669 16:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:04.669 16:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.669 16:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:04.928 16:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.928 16:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:20:04.928 16:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:05.186 16:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:05.445 16:55:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:06.408 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:06.408 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:06.408 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.408 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:06.671 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:06.671 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:06.671 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.671 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:06.929 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.929 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:06.929 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.929 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:07.188 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.188 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:07.188 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.188 16:55:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:07.446 16:55:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.446 16:55:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:07.446 16:55:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.446 16:55:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:07.706 16:55:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.706 16:55:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:07.706 16:55:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.706 16:55:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:07.965 16:55:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.965 16:55:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:07.965 16:55:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:08.224 16:55:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:08.482 16:55:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:09.415 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:09.415 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:09.415 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.415 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:09.674 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.674 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:09.674 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:09.674 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.240 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:10.240 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:10.240 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:10.240 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.240 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.240 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:20:10.240 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:10.240 16:55:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.498 16:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.498 16:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:10.498 16:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.498 16:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:10.757 16:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.757 16:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:10.757 16:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.757 16:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:11.015 16:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.016 16:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:20:11.016 16:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:11.274 16:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:11.532 16:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:12.467 16:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:12.467 16:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:12.467 16:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.467 16:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:13.033 16:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.033 16:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:13.033 16:55:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.033 16:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:13.291 16:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:13.291 16:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:13.291 16:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.291 16:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:13.291 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.291 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:13.291 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.291 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:13.549 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.549 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:13.549 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.549 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:13.807 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.807 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:13.807 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.807 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:14.372 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:14.372 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:14.372 16:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:14.372 16:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:14.630 16:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:15.563 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:15.563 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:15.563 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:15.563 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.821 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:15.821 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:15.821 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.821 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:16.387 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:16.387 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:16.387 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.387 16:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:16.387 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.387 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:16.387 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.387 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:16.644 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.644 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:16.644 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.644 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:20:16.902 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:16.903 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:16.903 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.903 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:17.161 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:17.161 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:17.161 16:55:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:17.419 16:55:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:17.677 16:55:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:18.612 16:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:18.612 16:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:18.612 16:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.612 16:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:18.870 16:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:18.870 16:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:18.870 16:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.870 16:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:19.128 16:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.128 16:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:19.128 16:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:19.128 16:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
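Every port_status check repeated through this section follows the same pattern: query bdevperf's RPC socket for the I/O paths and compare one field of the path whose trsvcid matches. A minimal standalone equivalent of the helper (rpc.py path shortened) looks like:

    # port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
    port_status() {
        local port=$1 field=$2 expected=$3
        local actual
        actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

check_status simply runs it six times, comparing the current, connected and accessible flags of both ports against its six arguments.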
00:20:19.385 16:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.385 16:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:19.385 16:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:19.385 16:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.643 16:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.643 16:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:19.643 16:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.643 16:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:19.902 16:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:19.902 16:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:19.902 16:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:19.902 16:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.162 16:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.162 16:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:20.420 16:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:20.420 16:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:20.680 16:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:20.939 16:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:21.876 16:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:21.876 16:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:21.876 16:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
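From here on the Nvme0n1 bdev runs with the active_active multipath policy set just above, and each iteration flips the ANA state of the two listeners before re-checking the paths. The state changes themselves are plain target-side RPCs; a sketch of the set_ANA_state helper as it appears in the log (rpc.py path shortened):

    # set_ANA_state <state for port 4420> <state for port 4421>
    # states exercised in this test: optimized, non_optimized, inaccessible
    set_ANA_state() {
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }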
00:20:21.876 16:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:22.135 16:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.135 16:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:22.135 16:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.135 16:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:22.394 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.394 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:22.394 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.394 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:22.653 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.653 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:22.653 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.653 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:22.912 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.912 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:22.912 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.912 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:23.171 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:23.171 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:23.171 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.171 16:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:23.430 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:23.430 
16:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:23.430 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:23.689 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:23.949 16:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:24.886 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:24.886 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:25.144 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.144 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:25.402 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:25.402 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:25.402 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.402 16:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:25.660 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.660 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:25.660 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.660 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:25.919 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.919 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:25.919 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.919 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:26.178 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:26.178 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:26.178 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:26.178 16:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.436 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:26.436 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:26.436 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.436 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:26.695 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:26.695 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:26.695 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:26.954 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:27.214 16:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:28.151 16:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:28.151 16:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:28.151 16:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.151 16:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:28.411 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:28.411 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:28.411 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:28.411 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.669 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:28.669 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:20:28.669 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.670 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:28.928 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:28.929 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:28.929 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.929 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:29.497 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:29.497 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:29.497 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.497 16:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:29.497 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:29.497 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:29.497 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.497 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:29.756 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:29.756 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:29.756 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:30.015 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:30.274 16:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:31.225 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:31.225 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:31.225 16:55:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:31.225 16:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:31.536 16:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:31.536 16:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:31.536 16:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:31.536 16:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:31.804 16:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:31.804 16:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:31.804 16:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:31.804 16:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:32.063 16:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:32.063 16:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:32.063 16:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:32.063 16:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:32.321 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:32.321 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:32.321 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:32.321 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:32.580 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:32.580 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:32.580 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:32.580 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:20:32.839 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:32.839 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 93067 00:20:32.839 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 93067 ']' 00:20:32.839 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 93067 00:20:32.839 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:20:32.839 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.839 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93067 00:20:32.839 killing process with pid 93067 00:20:32.839 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:32.839 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:32.839 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93067' 00:20:32.839 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 93067 00:20:32.839 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 93067 00:20:32.839 { 00:20:32.839 "results": [ 00:20:32.839 { 00:20:32.839 "job": "Nvme0n1", 00:20:32.839 "core_mask": "0x4", 00:20:32.839 "workload": "verify", 00:20:32.839 "status": "terminated", 00:20:32.839 "verify_range": { 00:20:32.839 "start": 0, 00:20:32.839 "length": 16384 00:20:32.839 }, 00:20:32.839 "queue_depth": 128, 00:20:32.839 "io_size": 4096, 00:20:32.839 "runtime": 32.861372, 00:20:32.839 "iops": 9377.179991145835, 00:20:32.839 "mibps": 36.62960934041342, 00:20:32.839 "io_failed": 0, 00:20:32.839 "io_timeout": 0, 00:20:32.839 "avg_latency_us": 13622.31210334383, 00:20:32.839 "min_latency_us": 863.8836363636364, 00:20:32.839 "max_latency_us": 4026531.84 00:20:32.839 } 00:20:32.839 ], 00:20:32.839 "core_count": 1 00:20:32.839 } 00:20:33.108 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 93067 00:20:33.108 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:33.108 [2024-11-29 16:55:22.496936] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:20:33.109 [2024-11-29 16:55:22.497031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93067 ] 00:20:33.109 [2024-11-29 16:55:22.615997] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
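The ANA transitions traced above all follow the same pattern: flip the listener state on the target with nvmf_subsystem_listener_set_ana_state, sleep 1 so the change can propagate to the initiator, then query bdev_nvme_get_io_paths over the bdevperf RPC socket and filter one field per port with jq. A minimal sketch of that check, reusing the paths, NQN, and addresses printed in this log; the port_field helper and the fixed expected-value comparison are illustrative stand-ins for the script's own port_status/@64 helper:

```sh
#!/usr/bin/env bash
# Sketch of the ANA flip + path-status check recorded above (illustrative helper names).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
SOCK=/var/tmp/bdevperf.sock

# Flip ANA state on the two listeners (e.g. non_optimized / inaccessible).
"$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
"$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
sleep 1   # give the initiator time to pick up the ANA change

# port_status-style check: pull one field for one trsvcid and compare it.
port_field() {   # usage: port_field <trsvcid> <field> <expected>
  local got
  got=$("$RPC" -s "$SOCK" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
  [[ "$got" == "$3" ]] || { echo "port $1 $2: got $got, want $3" >&2; return 1; }
}

port_field 4420 current    true
port_field 4421 accessible false
```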
00:20:33.109 [2024-11-29 16:55:22.647452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.109 [2024-11-29 16:55:22.671579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.109 [2024-11-29 16:55:22.705258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:33.109 Running I/O for 90 seconds... 00:20:33.109 7829.00 IOPS, 30.58 MiB/s [2024-11-29T16:55:56.901Z] 7946.00 IOPS, 31.04 MiB/s [2024-11-29T16:55:56.901Z] 7900.33 IOPS, 30.86 MiB/s [2024-11-29T16:55:56.901Z] 7877.25 IOPS, 30.77 MiB/s [2024-11-29T16:55:56.901Z] 7840.80 IOPS, 30.63 MiB/s [2024-11-29T16:55:56.901Z] 8230.17 IOPS, 32.15 MiB/s [2024-11-29T16:55:56.901Z] 8519.57 IOPS, 33.28 MiB/s [2024-11-29T16:55:56.901Z] 8735.62 IOPS, 34.12 MiB/s [2024-11-29T16:55:56.901Z] 8938.33 IOPS, 34.92 MiB/s [2024-11-29T16:55:56.901Z] 9106.10 IOPS, 35.57 MiB/s [2024-11-29T16:55:56.901Z] 9216.45 IOPS, 36.00 MiB/s [2024-11-29T16:55:56.901Z] 9327.58 IOPS, 36.44 MiB/s [2024-11-29T16:55:56.901Z] 9430.38 IOPS, 36.84 MiB/s [2024-11-29T16:55:56.901Z] 9493.29 IOPS, 37.08 MiB/s [2024-11-29T16:55:56.901Z] [2024-11-29 16:55:38.076330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.076403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.076491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.076526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.076560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.076594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.076627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.076659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.076698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.076754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.076791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.076824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.076856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.076889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.076924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.076957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.076976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.076990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:33.109 [2024-11-29 16:55:38.077059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.077096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.077130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.077165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.077210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.077247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.077282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.077317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.077351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.077403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 
nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.077437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.077472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.077506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.077541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.077576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.109 [2024-11-29 16:55:38.077610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.109 [2024-11-29 16:55:38.077650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.109 [2024-11-29 16:55:38.077680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.077696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.077717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.077731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.077751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.077765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.077799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.077813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.077832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.077846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.077866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.077880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.077899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.077913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.077932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.077947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.077966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.077980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.077999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
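Every completion notice in this stretch carries the status pair (03/02): Status Code Type 0x3 (Path Related Status) with Status Code 0x02, Asymmetric Access Inaccessible, which is consistent with I/O landing on a path whose listener has just been set to inaccessible. A rough way to tally the captured try.txt (the file cat'ed above) by command direction and completion status; the grep/sed pipeline below is only a quick sketch over that log format:

```sh
# Rough tally of the qpair traces in the captured log.
LOG=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

# Commands by direction.
grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: READ'  "$LOG"
grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: WRITE' "$LOG"

# Completion status pairs, e.g. "03/02" = Path Related Status / Asymmetric Access Inaccessible.
grep -o 'spdk_nvme_print_completion: \*NOTICE\*: .* ([0-9a-f]*/[0-9a-f]*)' "$LOG" \
  | sed 's/.*(\(..\/..\)).*/\1/' | sort | uniq -c
```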
00:20:33.110 [2024-11-29 16:55:38.078142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.110 [2024-11-29 16:55:38.078222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.110 [2024-11-29 16:55:38.078256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.110 [2024-11-29 16:55:38.078288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.110 [2024-11-29 16:55:38.078323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.110 [2024-11-29 16:55:38.078370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.110 [2024-11-29 16:55:38.078405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.110 [2024-11-29 16:55:38.078454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.110 [2024-11-29 16:55:38.078489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.078976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.078996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.079011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.079032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.079054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.079075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.079090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.079110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.110 [2024-11-29 16:55:38.079125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:33.110 [2024-11-29 16:55:38.079145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
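The killprocess sequence recorded earlier (the common/autotest_common.sh@954 through @978 steps) amounts to: refuse an empty pid, probe the process with kill -0, read its command name so a sudo wrapper is never signalled directly, then kill and wait. A condensed sketch of that sequence; the sudo branch and error reporting of the real helper are elided, and the comments only point back to the line markers printed above:

```sh
killprocess() {   # condensed from the autotest_common.sh steps logged above
  local pid=$1 process_name
  [[ -n "$pid" ]] || return 1                          # @954: reject an empty pid
  kill -0 "$pid" || return 1                           # @958: is the process still alive?
  if [[ "$(uname)" == Linux ]]; then                   # @959
    process_name=$(ps --no-headers -o comm= "$pid")    # @960: e.g. reactor_2
  fi
  [[ "$process_name" == sudo ]] && return 1            # @964: never signal the sudo wrapper itself
  echo "killing process with pid $pid"                 # @972
  kill "$pid"                                          # @973
  wait "$pid"                                          # @978
}
```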
00:20:33.111 [2024-11-29 16:55:38.079263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.079954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.079991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.080008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.080055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.080091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.111 [2024-11-29 16:55:38.080160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.111 [2024-11-29 16:55:38.080196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.111 [2024-11-29 16:55:38.080231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.111 [2024-11-29 16:55:38.080265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.111 [2024-11-29 16:55:38.080299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.111 [2024-11-29 16:55:38.080333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.111 [2024-11-29 16:55:38.080386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.111 [2024-11-29 16:55:38.080422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.080472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:20:33.111 [2024-11-29 16:55:38.080493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.080508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.080559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.080595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.080631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.080669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.111 [2024-11-29 16:55:38.080704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:33.111 [2024-11-29 16:55:38.080725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.112 [2024-11-29 16:55:38.080740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.080760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.112 [2024-11-29 16:55:38.080775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.080795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.112 [2024-11-29 16:55:38.080810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.080831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.112 [2024-11-29 16:55:38.080845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.080866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.112 [2024-11-29 16:55:38.080881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.080916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.112 [2024-11-29 16:55:38.080930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.080950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.112 [2024-11-29 16:55:38.080965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.080987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.112 [2024-11-29 16:55:38.081008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.081699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.112 [2024-11-29 16:55:38.081742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.081776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:38.081793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.081821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:38.081836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.081864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:38.081894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.081921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:38.081936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.081963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:38.081977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.082007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:38.082023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.082050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:38.082081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:38.082122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:38.082142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:33.112 9108.93 IOPS, 35.58 MiB/s [2024-11-29T16:55:56.904Z] 8539.62 IOPS, 33.36 MiB/s [2024-11-29T16:55:56.904Z] 8037.29 IOPS, 31.40 MiB/s [2024-11-29T16:55:56.904Z] 7590.78 IOPS, 29.65 MiB/s [2024-11-29T16:55:56.904Z] 7541.21 IOPS, 29.46 MiB/s [2024-11-29T16:55:56.904Z] 7676.15 IOPS, 29.98 MiB/s [2024-11-29T16:55:56.904Z] 7858.48 IOPS, 30.70 MiB/s [2024-11-29T16:55:56.904Z] 8151.59 IOPS, 31.84 MiB/s [2024-11-29T16:55:56.904Z] 8374.43 IOPS, 32.71 MiB/s [2024-11-29T16:55:56.904Z] 8569.67 IOPS, 33.48 MiB/s [2024-11-29T16:55:56.904Z] 8653.76 IOPS, 33.80 MiB/s [2024-11-29T16:55:56.904Z] 8715.04 IOPS, 34.04 MiB/s [2024-11-29T16:55:56.904Z] 8770.93 IOPS, 34.26 MiB/s [2024-11-29T16:55:56.904Z] 8939.43 IOPS, 34.92 MiB/s [2024-11-29T16:55:56.904Z] 9099.00 IOPS, 35.54 MiB/s [2024-11-29T16:55:56.904Z] 9255.33 IOPS, 36.15 MiB/s [2024-11-29T16:55:56.904Z] [2024-11-29 16:55:53.960116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:53.960187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:53.960276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:53.960311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:53.960359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:53.960396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:53.960429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.112 [2024-11-29 16:55:53.960462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.112 [2024-11-29 16:55:53.960496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.112 [2024-11-29 16:55:53.960528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.112 [2024-11-29 16:55:53.960561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.112 [2024-11-29 16:55:53.960594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:53.960627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:53.960660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:53.960703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:20:33.112 [2024-11-29 16:55:53.960724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:53.960738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:53.960771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:53.960804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:53.960840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.112 [2024-11-29 16:55:53.960874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:33.112 [2024-11-29 16:55:53.960893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.960906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.960926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.960940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.960959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.960973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.960992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.113 [2024-11-29 16:55:53.961040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.113 [2024-11-29 16:55:53.961074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.113 [2024-11-29 16:55:53.961116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.113 [2024-11-29 16:55:53.961217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.113 [2024-11-29 16:55:53.961251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.113 [2024-11-29 16:55:53.961284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.113 [2024-11-29 16:55:53.961317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.113 [2024-11-29 16:55:53.961467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.113 [2024-11-29 16:55:53.961678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.113 [2024-11-29 16:55:53.961711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:33.113 [2024-11-29 16:55:53.961744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.113 [2024-11-29 16:55:53.961778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.113 [2024-11-29 16:55:53.961945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.961971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.113 [2024-11-29 16:55:53.961986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:33.113 [2024-11-29 16:55:53.962005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.962019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.962052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:20424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.962086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.962119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.962152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.962186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.962220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.962253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.962286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.962320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.962368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.962411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.962444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.962478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.962511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.962544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.962577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.962610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.962630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.962644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.963964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.963994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.964039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.964090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:33.114 [2024-11-29 16:55:53.964110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.964138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.964183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.964219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.964253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.964288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.964321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.964354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.964406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.964441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.964474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.964508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.964542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.964575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.114 [2024-11-29 16:55:53.964617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.964670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.964706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.114 [2024-11-29 16:55:53.964739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:33.114 [2024-11-29 16:55:53.964759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.964772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.964792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.964805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.964825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.964838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.964858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.964873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.964892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.964906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.964928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.964943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.964964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.964978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.964997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.965011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.965030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.965044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.965083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.965099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.965118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.965133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.965153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.965167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.965186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:33.115 [2024-11-29 16:55:53.965200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.965220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.965234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.965253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.965267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.965286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.965300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.965319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.965347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.965368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.965383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.965403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.965417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.966763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.966790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.966815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.966830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.966861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.966877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.966897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 
nsid:1 lba:20288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.966911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.966930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.966944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.966964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.966978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.966997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.967011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.967030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.967044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.967063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.967077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.967102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.967116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.967135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.967149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.967169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.967183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.967202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.967216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.967235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.967249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.967268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.115 [2024-11-29 16:55:53.967289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.967309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.967335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.967359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.967373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.967393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.967407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.967426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.115 [2024-11-29 16:55:53.967440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:33.115 [2024-11-29 16:55:53.967460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.967473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.967493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.967506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.967526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.967540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.967559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.967573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 
dnr:0 00:20:33.116 [2024-11-29 16:55:53.967592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.967606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.967625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.967639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.967661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.967675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.967695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.967715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.967736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.967794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.967832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.967848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.967870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.967885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.967907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.967923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.967944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.967960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.967986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.968002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.968024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.968039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.968061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.968077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.968113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.968142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.968191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.968206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.968225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.968239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.968258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.968272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.968309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.968324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.968343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.968357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.968398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.968416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.968448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.968465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.968485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.968499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.968519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.968534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.968553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.968568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.968588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.968602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.969758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.969784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.969813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.969829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.969850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.969864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.969883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.969897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.969927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.116 [2024-11-29 16:55:53.969943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.969963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:33.116 [2024-11-29 16:55:53.969977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.969996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.970010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.970029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.970043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.970063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.970076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.970096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.970110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:33.116 [2024-11-29 16:55:53.970129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.116 [2024-11-29 16:55:53.970144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.970177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.970210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.970243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.970277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.970310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.970369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.970425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.970461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.970494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.970527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.970560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.970593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.970641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.970673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.970705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.970737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.970769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.970809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.970843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.970875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.970908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.970940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.970977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.970999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.971014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 
00:20:33.117 [2024-11-29 16:55:53.971032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.971046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.972213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.972239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.972264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.972279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.972299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.972313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.972333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.972358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.972378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.972392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.972438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.972456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.972493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.117 [2024-11-29 16:55:53.972508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.972528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.972543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:33.117 [2024-11-29 16:55:53.972564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.117 [2024-11-29 16:55:53.972579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.972609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.972624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.972660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.972675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.972694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.972708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.972728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.972742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.972763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.972777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.972800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.972815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.972834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.972849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.972869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.972883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.972912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.972927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.972976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.972995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.973016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.973030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.973049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.973063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.973083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.973097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.973116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.973129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.973148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.973162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.973181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.973195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.973214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.973228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.973247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.973261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.973280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.973294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:33.118 [2024-11-29 16:55:53.974310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.974392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.974431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.974466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.974501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.974535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.974569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.974603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.974637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.974686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.974720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.974753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.974786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.974831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.974886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.974934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.974968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.974986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.975000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.975018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.118 [2024-11-29 16:55:53.975032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:33.118 [2024-11-29 16:55:53.975051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.118 [2024-11-29 16:55:53.975065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.975083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.975097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.975116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.975129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.975148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.975162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.976874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.976900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.976924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.976940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.976960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.976974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.977019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.977051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.977084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.977120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 
dnr:0 00:20:33.119 [2024-11-29 16:55:53.977139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.977152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.977185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.977217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.977249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.977281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.977313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.977379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.977429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.977473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.977507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.977541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.977575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.977610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.977644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.977693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.977741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.977773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.977805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.977838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.977857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.977870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.986672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.986717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.986742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.986757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.986782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.986798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.986817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.986831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.986849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.986863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.986881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.986895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.986913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.986926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.986945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.119 [2024-11-29 16:55:53.986958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:33.119 [2024-11-29 16:55:53.986976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.119 [2024-11-29 16:55:53.986989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:33.120 [2024-11-29 16:55:53.987008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:33.120 [2024-11-29 16:55:53.987022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:33.120 [2024-11-29 16:55:53.988755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.120 [2024-11-29 16:55:53.988796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:33.120 [2024-11-29 16:55:53.988850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.120 [2024-11-29 16:55:53.988876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:33.120 [2024-11-29 16:55:53.988907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.120 [2024-11-29 16:55:53.988944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:33.120 [2024-11-29 16:55:53.988975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.120 [2024-11-29 16:55:53.988995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:33.120 [2024-11-29 16:55:53.989024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.120 [2024-11-29 16:55:53.989044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:33.120 9323.71 IOPS, 36.42 MiB/s [2024-11-29T16:55:56.912Z] 9356.59 IOPS, 36.55 MiB/s [2024-11-29T16:55:56.912Z] Received shutdown signal, test time was about 32.862159 seconds 00:20:33.120 00:20:33.120 Latency(us) 00:20:33.120 [2024-11-29T16:55:56.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.120 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:33.120 Verification LBA range: start 0x0 length 0x4000 00:20:33.120 Nvme0n1 : 32.86 9377.18 36.63 0.00 0.00 13622.31 863.88 4026531.84 00:20:33.120 [2024-11-29T16:55:56.912Z] =================================================================================================================== 00:20:33.120 [2024-11-29T16:55:56.912Z] Total : 9377.18 36.63 0.00 0.00 13622.31 863.88 4026531.84 00:20:33.120 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:33.379 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:33.379 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:33.379 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:33.379 16:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.379 16:55:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.379 rmmod nvme_tcp 00:20:33.379 rmmod nvme_fabrics 00:20:33.379 rmmod nvme_keyring 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 93024 ']' 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 93024 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 93024 ']' 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 93024 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93024 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:33.379 killing process with pid 93024 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93024' 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 93024 00:20:33.379 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 93024 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:33.639 16:55:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.639 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:20:33.899 00:20:33.899 real 0m37.545s 00:20:33.899 user 2m0.973s 00:20:33.899 sys 0m11.631s 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:33.899 ************************************ 00:20:33.899 END TEST nvmf_host_multipath_status 00:20:33.899 ************************************ 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.899 ************************************ 00:20:33.899 START TEST nvmf_discovery_remove_ifc 00:20:33.899 ************************************ 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:33.899 * Looking for test storage... 00:20:33.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:33.899 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:34.159 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:34.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.160 --rc genhtml_branch_coverage=1 00:20:34.160 --rc genhtml_function_coverage=1 00:20:34.160 --rc genhtml_legend=1 00:20:34.160 --rc geninfo_all_blocks=1 00:20:34.160 --rc geninfo_unexecuted_blocks=1 00:20:34.160 00:20:34.160 ' 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:34.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.160 --rc genhtml_branch_coverage=1 00:20:34.160 --rc genhtml_function_coverage=1 00:20:34.160 --rc genhtml_legend=1 00:20:34.160 --rc geninfo_all_blocks=1 00:20:34.160 --rc geninfo_unexecuted_blocks=1 00:20:34.160 00:20:34.160 ' 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:34.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.160 --rc genhtml_branch_coverage=1 00:20:34.160 --rc genhtml_function_coverage=1 00:20:34.160 --rc genhtml_legend=1 00:20:34.160 --rc geninfo_all_blocks=1 00:20:34.160 --rc geninfo_unexecuted_blocks=1 00:20:34.160 00:20:34.160 ' 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:34.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.160 --rc genhtml_branch_coverage=1 00:20:34.160 --rc genhtml_function_coverage=1 00:20:34.160 --rc genhtml_legend=1 00:20:34.160 --rc geninfo_all_blocks=1 00:20:34.160 --rc geninfo_unexecuted_blocks=1 00:20:34.160 00:20:34.160 ' 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:34.160 16:55:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:34.160 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:34.160 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:34.161 16:55:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:34.161 Cannot find device "nvmf_init_br" 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:34.161 Cannot find device "nvmf_init_br2" 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:34.161 Cannot find device "nvmf_tgt_br" 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:34.161 Cannot find device "nvmf_tgt_br2" 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:34.161 Cannot find device "nvmf_init_br" 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:34.161 Cannot find device "nvmf_init_br2" 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:34.161 Cannot find device "nvmf_tgt_br" 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:34.161 Cannot find device "nvmf_tgt_br2" 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:34.161 Cannot find device "nvmf_br" 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:34.161 Cannot find device "nvmf_init_if" 00:20:34.161 16:55:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:34.161 Cannot find device "nvmf_init_if2" 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:34.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:34.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:34.161 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:34.420 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:34.420 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:34.420 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:34.420 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:34.420 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:34.420 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:34.420 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:34.420 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:34.420 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:34.420 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:34.420 16:55:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:34.420 16:55:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:34.420 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:34.420 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:34.421 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:34.421 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:20:34.421 00:20:34.421 --- 10.0.0.3 ping statistics --- 00:20:34.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.421 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:34.421 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:34.421 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:20:34.421 00:20:34.421 --- 10.0.0.4 ping statistics --- 00:20:34.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.421 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:34.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:34.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:34.421 00:20:34.421 --- 10.0.0.1 ping statistics --- 00:20:34.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.421 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:34.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:20:34.421 00:20:34.421 --- 10.0.0.2 ping statistics --- 00:20:34.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.421 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=93893 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 93893 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 93893 ']' 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
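[Editor's note] For readers skimming the trace, the nvmf_veth_init sequence above reduces to the condensed sketch below. It is reconstructed from the logged commands, not copied from nvmf/common.sh; the namespace, interface, and bridge names (nvmf_tgt_ns_spdk, nvmf_init_if*, nvmf_tgt_if*, nvmf_br) and the 10.0.0.x addresses are exactly the ones that appear in the log, everything else is a simplification.

    # Rough reconstruction of the virtual test network built above (requires root).
    ip netns add nvmf_tgt_ns_spdk

    # Two initiator-side and two target-side veth pairs; the *_br ends will join a bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiators 10.0.0.1/.2 in the default namespace, targets 10.0.0.3/.4 in the netns.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the *_br ends together.
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$l" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" master nvmf_br
    done

    # Allow NVMe/TCP traffic in and across the bridge, then sanity-check connectivity.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Note that the real helper (ipts) additionally tags every iptables rule with an 'SPDK_NVMF:' comment, as visible above, so the rules can be filtered back out during teardown; that detail is omitted from this sketch. [End editor's note]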
00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.421 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.421 [2024-11-29 16:55:58.201818] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:20:34.421 [2024-11-29 16:55:58.201908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.679 [2024-11-29 16:55:58.328666] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:34.679 [2024-11-29 16:55:58.355854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.679 [2024-11-29 16:55:58.374735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.679 [2024-11-29 16:55:58.374794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.680 [2024-11-29 16:55:58.374803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.680 [2024-11-29 16:55:58.374810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.680 [2024-11-29 16:55:58.374816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.680 [2024-11-29 16:55:58.375071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.680 [2024-11-29 16:55:58.402131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:34.680 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.680 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:34.680 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:34.680 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:34.680 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.938 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.938 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:34.938 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.938 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.938 [2024-11-29 16:55:58.517589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.939 [2024-11-29 16:55:58.525741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:34.939 null0 00:20:34.939 [2024-11-29 16:55:58.557606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:34.939 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.939 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@59 -- # hostpid=93917 00:20:34.939 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:34.939 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 93917 /tmp/host.sock 00:20:34.939 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 93917 ']' 00:20:34.939 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:20:34.939 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.939 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:34.939 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:34.939 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.939 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.939 [2024-11-29 16:55:58.627160] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:20:34.939 [2024-11-29 16:55:58.627237] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93917 ] 00:20:35.198 [2024-11-29 16:55:58.746075] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
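[Editor's note] Condensing the process setup above: the test runs two independent SPDK applications, the NVMe-oF target inside the network namespace and a second nvmf_tgt that acts purely as the host/initiator side, each with its own RPC socket. The sketch below is a hedged reconstruction from the logged command lines (binary path, core masks, and flags as they appear in the trace); waitforlisten is the autotest helper that polls the given RPC socket until the application is ready.

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt

    # Target side: runs inside the netns, core mask 0x2, default RPC socket /var/tmp/spdk.sock.
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                 # waits for /var/tmp/spdk.sock

    # Host side: a second nvmf_tgt used only as an NVMe initiator, held at --wait-for-rpc so the
    # test can set bdev_nvme options before framework_start_init; bdev_nvme debug logging enabled.
    "$SPDK_BIN" -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!
    waitforlisten "$hostpid" /tmp/host.sock

[End editor's note]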
00:20:35.198 [2024-11-29 16:55:58.779231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.198 [2024-11-29 16:55:58.802979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.198 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.198 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:35.198 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:35.198 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:35.198 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.198 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:35.198 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.198 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:35.198 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.198 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:35.198 [2024-11-29 16:55:58.936187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:35.198 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.198 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:35.198 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.198 16:55:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:36.574 [2024-11-29 16:55:59.970382] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:36.574 [2024-11-29 16:55:59.970406] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:36.574 [2024-11-29 16:55:59.970438] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:36.574 [2024-11-29 16:55:59.976438] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:36.574 [2024-11-29 16:56:00.030849] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:36.574 [2024-11-29 16:56:00.031821] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x23dd320:1 started. 
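[Editor's note] From here on the test repeatedly calls two small helpers, get_bdev_list and wait_for_bdev, whose behaviour can be read off the rpc_cmd/jq/sort/xargs/sleep lines that recur below. A plausible reconstruction (not the literal script source) looks like this:

    # List the bdevs the host application currently sees, as a single sorted line (e.g. "nvme0n1").
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list matches the expected value:
    # "nvme0n1" after the first attach, "" after the interface is removed, "nvme1n1" after re-attach.
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

[End editor's note]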
00:20:36.574 [2024-11-29 16:56:00.033363] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:36.574 [2024-11-29 16:56:00.033437] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:36.574 [2024-11-29 16:56:00.033462] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:36.574 [2024-11-29 16:56:00.033477] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:36.574 [2024-11-29 16:56:00.033501] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:36.574 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.574 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:36.575 [2024-11-29 16:56:00.039081] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x23dd320 was disconnected and freed. delete nvme_qpair. 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:36.575 16:56:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:36.575 16:56:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:37.511 16:56:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:37.511 16:56:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.511 16:56:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.511 16:56:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:37.511 16:56:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:37.511 16:56:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:37.511 16:56:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:37.511 16:56:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.511 16:56:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:37.511 16:56:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:38.446 16:56:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:38.446 16:56:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:38.446 16:56:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:38.446 16:56:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.446 16:56:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:38.446 16:56:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:38.446 16:56:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:38.705 16:56:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.705 16:56:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:38.705 16:56:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:39.642 16:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:39.642 16:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:39.642 16:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:39.642 16:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.642 16:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:39.642 16:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:39.642 16:56:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:39.642 16:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.642 16:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:39.642 16:56:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:40.581 16:56:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:40.581 16:56:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:40.581 16:56:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.581 16:56:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:40.581 16:56:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:40.581 16:56:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:40.581 16:56:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:40.581 16:56:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.840 16:56:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:40.840 16:56:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:41.785 16:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:41.785 16:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:41.785 16:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.785 16:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:41.785 16:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:41.785 16:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:41.785 16:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:41.785 16:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.785 16:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:41.785 16:56:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:41.785 [2024-11-29 16:56:05.461277] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:41.785 [2024-11-29 16:56:05.461391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.785 [2024-11-29 16:56:05.461407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.785 [2024-11-29 16:56:05.461420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 
nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.785 [2024-11-29 16:56:05.461428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.785 [2024-11-29 16:56:05.461437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.785 [2024-11-29 16:56:05.461445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.785 [2024-11-29 16:56:05.461454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.785 [2024-11-29 16:56:05.461462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.785 [2024-11-29 16:56:05.461487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.785 [2024-11-29 16:56:05.461496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.785 [2024-11-29 16:56:05.461504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b8e50 is same with the state(6) to be set 00:20:41.785 [2024-11-29 16:56:05.471277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b8e50 (9): Bad file descriptor 00:20:41.785 [2024-11-29 16:56:05.481294] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:41.785 [2024-11-29 16:56:05.481316] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:41.785 [2024-11-29 16:56:05.481338] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:41.785 [2024-11-29 16:56:05.481352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:41.785 [2024-11-29 16:56:05.481400] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
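[Editor's note] The keep-alive timeout and reconnect activity above is triggered by the interface removal a few log lines earlier; combined with the recovery flags passed to bdev_nvme_start_discovery when the controller was attached, the relevant pieces are (reconstructed from the logged commands, values as they appear in the trace):

    # Recovery timers from the bdev_nvme_start_discovery call:
    #   --reconnect-delay-sec 1        retry the TCP connection roughly once per second
    #   --fast-io-fail-timeout-sec 1   fail outstanding I/O after ~1 s without a connection
    #   --ctrlr-loss-timeout-sec 2     give up, delete the controller and its bdev after ~2 s

    # Simulate losing the target: drop its address and take the interface down inside the netns.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

The host-side bdev_nvme layer then produces the pattern recorded around this point: spdk_sock_recv()/connect() failures with errno 110, qpair deletion, periodic reconnect attempts, and once the controller-loss timeout expires the controller is dropped and nvme0n1 disappears from bdev_get_bdevs, which is what the empty-list wait below checks for. [End editor's note]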
00:20:42.721 16:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:42.721 16:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:42.721 16:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.721 16:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:42.721 16:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:42.722 16:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:42.722 16:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:42.981 [2024-11-29 16:56:06.516387] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:42.981 [2024-11-29 16:56:06.516464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b8e50 with addr=10.0.0.3, port=4420 00:20:42.981 [2024-11-29 16:56:06.516480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b8e50 is same with the state(6) to be set 00:20:42.981 [2024-11-29 16:56:06.516510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b8e50 (9): Bad file descriptor 00:20:42.981 [2024-11-29 16:56:06.516904] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:20:42.981 [2024-11-29 16:56:06.516936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:42.981 [2024-11-29 16:56:06.516947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:42.981 [2024-11-29 16:56:06.516957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:42.981 [2024-11-29 16:56:06.516965] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:42.981 [2024-11-29 16:56:06.516979] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:42.981 [2024-11-29 16:56:06.516985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:42.981 [2024-11-29 16:56:06.516995] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:42.981 [2024-11-29 16:56:06.517000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:42.981 16:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.981 16:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:42.981 16:56:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:43.918 [2024-11-29 16:56:07.517023] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:43.918 [2024-11-29 16:56:07.517066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:20:43.918 [2024-11-29 16:56:07.517085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:43.918 [2024-11-29 16:56:07.517109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:43.918 [2024-11-29 16:56:07.517117] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:20:43.918 [2024-11-29 16:56:07.517125] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:43.918 [2024-11-29 16:56:07.517130] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:43.918 [2024-11-29 16:56:07.517134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:43.918 [2024-11-29 16:56:07.517160] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:20:43.918 [2024-11-29 16:56:07.517189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.918 [2024-11-29 16:56:07.517203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.918 [2024-11-29 16:56:07.517214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.918 [2024-11-29 16:56:07.517222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.918 [2024-11-29 16:56:07.517230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.918 [2024-11-29 16:56:07.517237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.918 [2024-11-29 16:56:07.517245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.918 [2024-11-29 16:56:07.517253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.918 [2024-11-29 16:56:07.517261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.918 [2024-11-29 16:56:07.517284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.918 [2024-11-29 16:56:07.517293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:20:43.918 [2024-11-29 16:56:07.517504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a7390 (9): Bad file descriptor 00:20:43.918 [2024-11-29 16:56:07.518516] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:43.918 [2024-11-29 16:56:07.518554] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:43.918 16:56:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:45.296 16:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:45.296 16:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:45.296 16:56:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:45.296 16:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:45.296 16:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:45.296 16:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.296 16:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:45.296 16:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.296 16:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:45.296 16:56:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:45.864 [2024-11-29 16:56:09.524527] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:45.864 [2024-11-29 16:56:09.524551] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:45.864 [2024-11-29 16:56:09.524583] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:45.864 [2024-11-29 16:56:09.530558] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:20:45.864 [2024-11-29 16:56:09.584838] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:20:45.864 [2024-11-29 16:56:09.585526] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x2393a50:1 started. 00:20:45.864 [2024-11-29 16:56:09.586699] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:45.864 [2024-11-29 16:56:09.586781] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:45.864 [2024-11-29 16:56:09.586804] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:45.864 [2024-11-29 16:56:09.586819] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:20:45.864 [2024-11-29 16:56:09.586828] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:45.864 [2024-11-29 16:56:09.593215] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x2393a50 was disconnected and freed. delete nvme_qpair. 
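[Editor's note] The recovery half of the test, whose output ends here, mirrors the removal: the address and link are restored inside the namespace, the host's discovery poller reconnects to 10.0.0.3:8009, and a fresh controller (nqn.2016-06.io.spdk:cnode0, 2) is created, which is why the new bdev is nvme1n1 rather than nvme0n1. Reconstructed from the logged commands:

    # Bring the target interface back and wait for discovery to re-attach the subsystem.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    wait_for_bdev nvme1n1    # helper sketched earlier; polls bdev_get_bdevs until "nvme1n1" appears

[End editor's note]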
00:20:46.122 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:46.122 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:46.122 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 93917 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 93917 ']' 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 93917 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93917 00:20:46.123 killing process with pid 93917 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93917' 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 93917 00:20:46.123 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 93917 00:20:46.382 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:46.382 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:46.382 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:20:46.382 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:46.382 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:20:46.382 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:46.382 16:56:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:46.382 rmmod nvme_tcp 00:20:46.382 rmmod nvme_fabrics 00:20:46.382 rmmod nvme_keyring 00:20:46.382 16:56:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:46.382 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:20:46.382 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:20:46.382 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 93893 ']' 00:20:46.382 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 93893 00:20:46.382 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 93893 ']' 00:20:46.382 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 93893 00:20:46.382 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:20:46.382 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.382 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93893 00:20:46.382 killing process with pid 93893 00:20:46.382 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:46.382 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:46.382 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93893' 00:20:46.382 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 93893 00:20:46.382 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 93893 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.640 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.898 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:20:46.898 00:20:46.898 real 0m12.942s 00:20:46.898 user 0m22.036s 00:20:46.898 sys 0m2.429s 00:20:46.898 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.898 16:56:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:46.898 ************************************ 00:20:46.898 END TEST nvmf_discovery_remove_ifc 00:20:46.898 ************************************ 00:20:46.898 16:56:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:46.898 16:56:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:46.898 16:56:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.898 16:56:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.898 ************************************ 00:20:46.898 START TEST nvmf_identify_kernel_target 00:20:46.898 ************************************ 00:20:46.898 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:46.898 * Looking for test storage... 
00:20:46.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:46.898 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:46.898 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:20:46.898 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:46.898 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:46.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.899 --rc genhtml_branch_coverage=1 00:20:46.899 --rc genhtml_function_coverage=1 00:20:46.899 --rc genhtml_legend=1 00:20:46.899 --rc geninfo_all_blocks=1 00:20:46.899 --rc geninfo_unexecuted_blocks=1 00:20:46.899 00:20:46.899 ' 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:46.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.899 --rc genhtml_branch_coverage=1 00:20:46.899 --rc genhtml_function_coverage=1 00:20:46.899 --rc genhtml_legend=1 00:20:46.899 --rc geninfo_all_blocks=1 00:20:46.899 --rc geninfo_unexecuted_blocks=1 00:20:46.899 00:20:46.899 ' 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:46.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.899 --rc genhtml_branch_coverage=1 00:20:46.899 --rc genhtml_function_coverage=1 00:20:46.899 --rc genhtml_legend=1 00:20:46.899 --rc geninfo_all_blocks=1 00:20:46.899 --rc geninfo_unexecuted_blocks=1 00:20:46.899 00:20:46.899 ' 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:46.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.899 --rc genhtml_branch_coverage=1 00:20:46.899 --rc genhtml_function_coverage=1 00:20:46.899 --rc genhtml_legend=1 00:20:46.899 --rc geninfo_all_blocks=1 00:20:46.899 --rc geninfo_unexecuted_blocks=1 00:20:46.899 00:20:46.899 ' 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
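The scripts/common.sh trace above is the lcov version gate ("lt 1.15 2"); a minimal bash sketch of that comparison under the same split-on-'.-:' convention (the function name cmp_lt is illustrative, not the actual SPDK helper):

# Sketch only: element-wise version compare as stepped through in the trace.
cmp_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"     # e.g. "1.15" -> (1 15)
    IFS='.-:' read -ra ver2 <<< "$2"     # e.g. "2"    -> (2)
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1          # newer than the reference: not less-than
        (( a < b )) && return 0          # older than the reference: less-than holds
    done
    return 1                             # equal versions are not less-than
}

# Usage matching this log: pick lcov options based on the installed version.
if cmp_lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi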
00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.899 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:47.159 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:47.159 16:56:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:47.159 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:47.160 16:56:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:47.160 Cannot find device "nvmf_init_br" 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:47.160 Cannot find device "nvmf_init_br2" 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:47.160 Cannot find device "nvmf_tgt_br" 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:47.160 Cannot find device "nvmf_tgt_br2" 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:47.160 Cannot find device "nvmf_init_br" 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:47.160 Cannot find device "nvmf_init_br2" 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:47.160 Cannot find device "nvmf_tgt_br" 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:47.160 Cannot find device "nvmf_tgt_br2" 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:47.160 Cannot find device "nvmf_br" 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:47.160 Cannot find device "nvmf_init_if" 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:47.160 Cannot find device "nvmf_init_if2" 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:47.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:47.160 16:56:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:47.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:47.160 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:47.419 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:47.419 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:47.419 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:47.419 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:47.419 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:47.419 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:47.419 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:47.419 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:47.419 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:47.419 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:47.419 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:47.419 16:56:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:47.419 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:47.420 16:56:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:47.420 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:47.420 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:20:47.420 00:20:47.420 --- 10.0.0.3 ping statistics --- 00:20:47.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.420 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:47.420 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:47.420 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:20:47.420 00:20:47.420 --- 10.0.0.4 ping statistics --- 00:20:47.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.420 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:47.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:47.420 00:20:47.420 --- 10.0.0.1 ping statistics --- 00:20:47.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.420 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:47.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:47.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:20:47.420 00:20:47.420 --- 10.0.0.2 ping statistics --- 00:20:47.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.420 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:47.420 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:48.000 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:48.000 Waiting for block devices as requested 00:20:48.000 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:48.000 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:48.000 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:48.000 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:48.000 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:48.000 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:48.000 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:48.000 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:48.000 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:48.000 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:48.000 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:48.259 No valid GPT data, bailing 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:48.260 16:56:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:48.260 No valid GPT data, bailing 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:48.260 No valid GPT data, bailing 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:48.260 16:56:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:48.260 No valid GPT data, bailing 00:20:48.260 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -a 10.0.0.1 -t tcp -s 4420 00:20:48.520 00:20:48.520 Discovery Log Number of Records 2, Generation counter 2 00:20:48.520 =====Discovery Log Entry 0====== 00:20:48.520 trtype: tcp 00:20:48.520 adrfam: ipv4 00:20:48.520 subtype: current discovery subsystem 00:20:48.520 treq: not specified, sq flow control disable supported 00:20:48.520 portid: 1 00:20:48.520 trsvcid: 4420 00:20:48.520 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:48.520 traddr: 10.0.0.1 00:20:48.520 eflags: none 00:20:48.520 sectype: none 00:20:48.520 =====Discovery Log Entry 1====== 00:20:48.520 trtype: tcp 00:20:48.520 adrfam: ipv4 00:20:48.520 subtype: nvme subsystem 00:20:48.520 treq: not 
specified, sq flow control disable supported 00:20:48.520 portid: 1 00:20:48.520 trsvcid: 4420 00:20:48.520 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:48.520 traddr: 10.0.0.1 00:20:48.520 eflags: none 00:20:48.520 sectype: none 00:20:48.520 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:48.520 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:48.520 ===================================================== 00:20:48.520 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:48.520 ===================================================== 00:20:48.520 Controller Capabilities/Features 00:20:48.520 ================================ 00:20:48.520 Vendor ID: 0000 00:20:48.520 Subsystem Vendor ID: 0000 00:20:48.520 Serial Number: dc709f05731f27cff2b2 00:20:48.520 Model Number: Linux 00:20:48.520 Firmware Version: 6.8.9-20 00:20:48.520 Recommended Arb Burst: 0 00:20:48.520 IEEE OUI Identifier: 00 00 00 00:20:48.520 Multi-path I/O 00:20:48.520 May have multiple subsystem ports: No 00:20:48.520 May have multiple controllers: No 00:20:48.520 Associated with SR-IOV VF: No 00:20:48.520 Max Data Transfer Size: Unlimited 00:20:48.520 Max Number of Namespaces: 0 00:20:48.520 Max Number of I/O Queues: 1024 00:20:48.520 NVMe Specification Version (VS): 1.3 00:20:48.520 NVMe Specification Version (Identify): 1.3 00:20:48.520 Maximum Queue Entries: 1024 00:20:48.520 Contiguous Queues Required: No 00:20:48.520 Arbitration Mechanisms Supported 00:20:48.520 Weighted Round Robin: Not Supported 00:20:48.520 Vendor Specific: Not Supported 00:20:48.520 Reset Timeout: 7500 ms 00:20:48.520 Doorbell Stride: 4 bytes 00:20:48.520 NVM Subsystem Reset: Not Supported 00:20:48.520 Command Sets Supported 00:20:48.520 NVM Command Set: Supported 00:20:48.520 Boot Partition: Not Supported 00:20:48.520 Memory Page Size Minimum: 4096 bytes 00:20:48.520 Memory Page Size Maximum: 4096 bytes 00:20:48.520 Persistent Memory Region: Not Supported 00:20:48.520 Optional Asynchronous Events Supported 00:20:48.520 Namespace Attribute Notices: Not Supported 00:20:48.520 Firmware Activation Notices: Not Supported 00:20:48.520 ANA Change Notices: Not Supported 00:20:48.520 PLE Aggregate Log Change Notices: Not Supported 00:20:48.520 LBA Status Info Alert Notices: Not Supported 00:20:48.520 EGE Aggregate Log Change Notices: Not Supported 00:20:48.520 Normal NVM Subsystem Shutdown event: Not Supported 00:20:48.520 Zone Descriptor Change Notices: Not Supported 00:20:48.520 Discovery Log Change Notices: Supported 00:20:48.520 Controller Attributes 00:20:48.520 128-bit Host Identifier: Not Supported 00:20:48.520 Non-Operational Permissive Mode: Not Supported 00:20:48.520 NVM Sets: Not Supported 00:20:48.520 Read Recovery Levels: Not Supported 00:20:48.520 Endurance Groups: Not Supported 00:20:48.520 Predictable Latency Mode: Not Supported 00:20:48.520 Traffic Based Keep ALive: Not Supported 00:20:48.520 Namespace Granularity: Not Supported 00:20:48.520 SQ Associations: Not Supported 00:20:48.520 UUID List: Not Supported 00:20:48.520 Multi-Domain Subsystem: Not Supported 00:20:48.520 Fixed Capacity Management: Not Supported 00:20:48.520 Variable Capacity Management: Not Supported 00:20:48.520 Delete Endurance Group: Not Supported 00:20:48.521 Delete NVM Set: Not Supported 00:20:48.521 Extended LBA Formats Supported: Not Supported 00:20:48.521 Flexible Data 
Placement Supported: Not Supported 00:20:48.521 00:20:48.521 Controller Memory Buffer Support 00:20:48.521 ================================ 00:20:48.521 Supported: No 00:20:48.521 00:20:48.521 Persistent Memory Region Support 00:20:48.521 ================================ 00:20:48.521 Supported: No 00:20:48.521 00:20:48.521 Admin Command Set Attributes 00:20:48.521 ============================ 00:20:48.521 Security Send/Receive: Not Supported 00:20:48.521 Format NVM: Not Supported 00:20:48.521 Firmware Activate/Download: Not Supported 00:20:48.521 Namespace Management: Not Supported 00:20:48.521 Device Self-Test: Not Supported 00:20:48.521 Directives: Not Supported 00:20:48.521 NVMe-MI: Not Supported 00:20:48.521 Virtualization Management: Not Supported 00:20:48.521 Doorbell Buffer Config: Not Supported 00:20:48.521 Get LBA Status Capability: Not Supported 00:20:48.521 Command & Feature Lockdown Capability: Not Supported 00:20:48.521 Abort Command Limit: 1 00:20:48.521 Async Event Request Limit: 1 00:20:48.521 Number of Firmware Slots: N/A 00:20:48.521 Firmware Slot 1 Read-Only: N/A 00:20:48.781 Firmware Activation Without Reset: N/A 00:20:48.781 Multiple Update Detection Support: N/A 00:20:48.781 Firmware Update Granularity: No Information Provided 00:20:48.781 Per-Namespace SMART Log: No 00:20:48.781 Asymmetric Namespace Access Log Page: Not Supported 00:20:48.781 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:48.781 Command Effects Log Page: Not Supported 00:20:48.781 Get Log Page Extended Data: Supported 00:20:48.781 Telemetry Log Pages: Not Supported 00:20:48.781 Persistent Event Log Pages: Not Supported 00:20:48.781 Supported Log Pages Log Page: May Support 00:20:48.781 Commands Supported & Effects Log Page: Not Supported 00:20:48.781 Feature Identifiers & Effects Log Page:May Support 00:20:48.781 NVMe-MI Commands & Effects Log Page: May Support 00:20:48.781 Data Area 4 for Telemetry Log: Not Supported 00:20:48.781 Error Log Page Entries Supported: 1 00:20:48.781 Keep Alive: Not Supported 00:20:48.781 00:20:48.781 NVM Command Set Attributes 00:20:48.781 ========================== 00:20:48.781 Submission Queue Entry Size 00:20:48.781 Max: 1 00:20:48.781 Min: 1 00:20:48.781 Completion Queue Entry Size 00:20:48.781 Max: 1 00:20:48.781 Min: 1 00:20:48.781 Number of Namespaces: 0 00:20:48.781 Compare Command: Not Supported 00:20:48.781 Write Uncorrectable Command: Not Supported 00:20:48.781 Dataset Management Command: Not Supported 00:20:48.781 Write Zeroes Command: Not Supported 00:20:48.781 Set Features Save Field: Not Supported 00:20:48.781 Reservations: Not Supported 00:20:48.781 Timestamp: Not Supported 00:20:48.781 Copy: Not Supported 00:20:48.781 Volatile Write Cache: Not Present 00:20:48.781 Atomic Write Unit (Normal): 1 00:20:48.781 Atomic Write Unit (PFail): 1 00:20:48.781 Atomic Compare & Write Unit: 1 00:20:48.781 Fused Compare & Write: Not Supported 00:20:48.781 Scatter-Gather List 00:20:48.781 SGL Command Set: Supported 00:20:48.781 SGL Keyed: Not Supported 00:20:48.781 SGL Bit Bucket Descriptor: Not Supported 00:20:48.781 SGL Metadata Pointer: Not Supported 00:20:48.781 Oversized SGL: Not Supported 00:20:48.781 SGL Metadata Address: Not Supported 00:20:48.781 SGL Offset: Supported 00:20:48.781 Transport SGL Data Block: Not Supported 00:20:48.781 Replay Protected Memory Block: Not Supported 00:20:48.781 00:20:48.781 Firmware Slot Information 00:20:48.781 ========================= 00:20:48.781 Active slot: 0 00:20:48.781 00:20:48.781 00:20:48.781 Error Log 
00:20:48.781 ========= 00:20:48.781 00:20:48.781 Active Namespaces 00:20:48.781 ================= 00:20:48.781 Discovery Log Page 00:20:48.781 ================== 00:20:48.781 Generation Counter: 2 00:20:48.781 Number of Records: 2 00:20:48.781 Record Format: 0 00:20:48.781 00:20:48.781 Discovery Log Entry 0 00:20:48.781 ---------------------- 00:20:48.781 Transport Type: 3 (TCP) 00:20:48.781 Address Family: 1 (IPv4) 00:20:48.781 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:48.781 Entry Flags: 00:20:48.781 Duplicate Returned Information: 0 00:20:48.781 Explicit Persistent Connection Support for Discovery: 0 00:20:48.781 Transport Requirements: 00:20:48.781 Secure Channel: Not Specified 00:20:48.781 Port ID: 1 (0x0001) 00:20:48.782 Controller ID: 65535 (0xffff) 00:20:48.782 Admin Max SQ Size: 32 00:20:48.782 Transport Service Identifier: 4420 00:20:48.782 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:48.782 Transport Address: 10.0.0.1 00:20:48.782 Discovery Log Entry 1 00:20:48.782 ---------------------- 00:20:48.782 Transport Type: 3 (TCP) 00:20:48.782 Address Family: 1 (IPv4) 00:20:48.782 Subsystem Type: 2 (NVM Subsystem) 00:20:48.782 Entry Flags: 00:20:48.782 Duplicate Returned Information: 0 00:20:48.782 Explicit Persistent Connection Support for Discovery: 0 00:20:48.782 Transport Requirements: 00:20:48.782 Secure Channel: Not Specified 00:20:48.782 Port ID: 1 (0x0001) 00:20:48.782 Controller ID: 65535 (0xffff) 00:20:48.782 Admin Max SQ Size: 32 00:20:48.782 Transport Service Identifier: 4420 00:20:48.782 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:48.782 Transport Address: 10.0.0.1 00:20:48.782 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:48.782 get_feature(0x01) failed 00:20:48.782 get_feature(0x02) failed 00:20:48.782 get_feature(0x04) failed 00:20:48.782 ===================================================== 00:20:48.782 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:48.782 ===================================================== 00:20:48.782 Controller Capabilities/Features 00:20:48.782 ================================ 00:20:48.782 Vendor ID: 0000 00:20:48.782 Subsystem Vendor ID: 0000 00:20:48.782 Serial Number: d7f1e9f64cbb75fb9715 00:20:48.782 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:48.782 Firmware Version: 6.8.9-20 00:20:48.782 Recommended Arb Burst: 6 00:20:48.782 IEEE OUI Identifier: 00 00 00 00:20:48.782 Multi-path I/O 00:20:48.782 May have multiple subsystem ports: Yes 00:20:48.782 May have multiple controllers: Yes 00:20:48.782 Associated with SR-IOV VF: No 00:20:48.782 Max Data Transfer Size: Unlimited 00:20:48.782 Max Number of Namespaces: 1024 00:20:48.782 Max Number of I/O Queues: 128 00:20:48.782 NVMe Specification Version (VS): 1.3 00:20:48.782 NVMe Specification Version (Identify): 1.3 00:20:48.782 Maximum Queue Entries: 1024 00:20:48.782 Contiguous Queues Required: No 00:20:48.782 Arbitration Mechanisms Supported 00:20:48.782 Weighted Round Robin: Not Supported 00:20:48.782 Vendor Specific: Not Supported 00:20:48.782 Reset Timeout: 7500 ms 00:20:48.782 Doorbell Stride: 4 bytes 00:20:48.782 NVM Subsystem Reset: Not Supported 00:20:48.782 Command Sets Supported 00:20:48.782 NVM Command Set: Supported 00:20:48.782 Boot Partition: Not Supported 00:20:48.782 Memory 
Page Size Minimum: 4096 bytes 00:20:48.782 Memory Page Size Maximum: 4096 bytes 00:20:48.782 Persistent Memory Region: Not Supported 00:20:48.782 Optional Asynchronous Events Supported 00:20:48.782 Namespace Attribute Notices: Supported 00:20:48.782 Firmware Activation Notices: Not Supported 00:20:48.782 ANA Change Notices: Supported 00:20:48.782 PLE Aggregate Log Change Notices: Not Supported 00:20:48.782 LBA Status Info Alert Notices: Not Supported 00:20:48.782 EGE Aggregate Log Change Notices: Not Supported 00:20:48.782 Normal NVM Subsystem Shutdown event: Not Supported 00:20:48.782 Zone Descriptor Change Notices: Not Supported 00:20:48.782 Discovery Log Change Notices: Not Supported 00:20:48.782 Controller Attributes 00:20:48.782 128-bit Host Identifier: Supported 00:20:48.782 Non-Operational Permissive Mode: Not Supported 00:20:48.782 NVM Sets: Not Supported 00:20:48.782 Read Recovery Levels: Not Supported 00:20:48.782 Endurance Groups: Not Supported 00:20:48.782 Predictable Latency Mode: Not Supported 00:20:48.782 Traffic Based Keep ALive: Supported 00:20:48.782 Namespace Granularity: Not Supported 00:20:48.782 SQ Associations: Not Supported 00:20:48.782 UUID List: Not Supported 00:20:48.782 Multi-Domain Subsystem: Not Supported 00:20:48.782 Fixed Capacity Management: Not Supported 00:20:48.782 Variable Capacity Management: Not Supported 00:20:48.782 Delete Endurance Group: Not Supported 00:20:48.782 Delete NVM Set: Not Supported 00:20:48.782 Extended LBA Formats Supported: Not Supported 00:20:48.782 Flexible Data Placement Supported: Not Supported 00:20:48.782 00:20:48.782 Controller Memory Buffer Support 00:20:48.782 ================================ 00:20:48.782 Supported: No 00:20:48.782 00:20:48.782 Persistent Memory Region Support 00:20:48.782 ================================ 00:20:48.782 Supported: No 00:20:48.782 00:20:48.782 Admin Command Set Attributes 00:20:48.782 ============================ 00:20:48.782 Security Send/Receive: Not Supported 00:20:48.782 Format NVM: Not Supported 00:20:48.782 Firmware Activate/Download: Not Supported 00:20:48.782 Namespace Management: Not Supported 00:20:48.782 Device Self-Test: Not Supported 00:20:48.782 Directives: Not Supported 00:20:48.782 NVMe-MI: Not Supported 00:20:48.782 Virtualization Management: Not Supported 00:20:48.782 Doorbell Buffer Config: Not Supported 00:20:48.782 Get LBA Status Capability: Not Supported 00:20:48.782 Command & Feature Lockdown Capability: Not Supported 00:20:48.782 Abort Command Limit: 4 00:20:48.782 Async Event Request Limit: 4 00:20:48.782 Number of Firmware Slots: N/A 00:20:48.782 Firmware Slot 1 Read-Only: N/A 00:20:48.782 Firmware Activation Without Reset: N/A 00:20:48.782 Multiple Update Detection Support: N/A 00:20:48.782 Firmware Update Granularity: No Information Provided 00:20:48.782 Per-Namespace SMART Log: Yes 00:20:48.782 Asymmetric Namespace Access Log Page: Supported 00:20:48.782 ANA Transition Time : 10 sec 00:20:48.782 00:20:48.782 Asymmetric Namespace Access Capabilities 00:20:48.782 ANA Optimized State : Supported 00:20:48.782 ANA Non-Optimized State : Supported 00:20:48.782 ANA Inaccessible State : Supported 00:20:48.782 ANA Persistent Loss State : Supported 00:20:48.782 ANA Change State : Supported 00:20:48.782 ANAGRPID is not changed : No 00:20:48.782 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:48.782 00:20:48.782 ANA Group Identifier Maximum : 128 00:20:48.782 Number of ANA Group Identifiers : 128 00:20:48.782 Max Number of Allowed Namespaces : 1024 00:20:48.782 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:48.782 Command Effects Log Page: Supported 00:20:48.782 Get Log Page Extended Data: Supported 00:20:48.782 Telemetry Log Pages: Not Supported 00:20:48.783 Persistent Event Log Pages: Not Supported 00:20:48.783 Supported Log Pages Log Page: May Support 00:20:48.783 Commands Supported & Effects Log Page: Not Supported 00:20:48.783 Feature Identifiers & Effects Log Page:May Support 00:20:48.783 NVMe-MI Commands & Effects Log Page: May Support 00:20:48.783 Data Area 4 for Telemetry Log: Not Supported 00:20:48.783 Error Log Page Entries Supported: 128 00:20:48.783 Keep Alive: Supported 00:20:48.783 Keep Alive Granularity: 1000 ms 00:20:48.783 00:20:48.783 NVM Command Set Attributes 00:20:48.783 ========================== 00:20:48.783 Submission Queue Entry Size 00:20:48.783 Max: 64 00:20:48.783 Min: 64 00:20:48.783 Completion Queue Entry Size 00:20:48.783 Max: 16 00:20:48.783 Min: 16 00:20:48.783 Number of Namespaces: 1024 00:20:48.783 Compare Command: Not Supported 00:20:48.783 Write Uncorrectable Command: Not Supported 00:20:48.783 Dataset Management Command: Supported 00:20:48.783 Write Zeroes Command: Supported 00:20:48.783 Set Features Save Field: Not Supported 00:20:48.783 Reservations: Not Supported 00:20:48.783 Timestamp: Not Supported 00:20:48.783 Copy: Not Supported 00:20:48.783 Volatile Write Cache: Present 00:20:48.783 Atomic Write Unit (Normal): 1 00:20:48.783 Atomic Write Unit (PFail): 1 00:20:48.783 Atomic Compare & Write Unit: 1 00:20:48.783 Fused Compare & Write: Not Supported 00:20:48.783 Scatter-Gather List 00:20:48.783 SGL Command Set: Supported 00:20:48.783 SGL Keyed: Not Supported 00:20:48.783 SGL Bit Bucket Descriptor: Not Supported 00:20:48.783 SGL Metadata Pointer: Not Supported 00:20:48.783 Oversized SGL: Not Supported 00:20:48.783 SGL Metadata Address: Not Supported 00:20:48.783 SGL Offset: Supported 00:20:48.783 Transport SGL Data Block: Not Supported 00:20:48.783 Replay Protected Memory Block: Not Supported 00:20:48.783 00:20:48.783 Firmware Slot Information 00:20:48.783 ========================= 00:20:48.783 Active slot: 0 00:20:48.783 00:20:48.783 Asymmetric Namespace Access 00:20:48.783 =========================== 00:20:48.783 Change Count : 0 00:20:48.783 Number of ANA Group Descriptors : 1 00:20:48.783 ANA Group Descriptor : 0 00:20:48.783 ANA Group ID : 1 00:20:48.783 Number of NSID Values : 1 00:20:48.783 Change Count : 0 00:20:48.783 ANA State : 1 00:20:48.783 Namespace Identifier : 1 00:20:48.783 00:20:48.783 Commands Supported and Effects 00:20:48.783 ============================== 00:20:48.783 Admin Commands 00:20:48.783 -------------- 00:20:48.783 Get Log Page (02h): Supported 00:20:48.783 Identify (06h): Supported 00:20:48.783 Abort (08h): Supported 00:20:48.783 Set Features (09h): Supported 00:20:48.783 Get Features (0Ah): Supported 00:20:48.783 Asynchronous Event Request (0Ch): Supported 00:20:48.783 Keep Alive (18h): Supported 00:20:48.783 I/O Commands 00:20:48.783 ------------ 00:20:48.783 Flush (00h): Supported 00:20:48.783 Write (01h): Supported LBA-Change 00:20:48.783 Read (02h): Supported 00:20:48.783 Write Zeroes (08h): Supported LBA-Change 00:20:48.783 Dataset Management (09h): Supported 00:20:48.783 00:20:48.783 Error Log 00:20:48.783 ========= 00:20:48.783 Entry: 0 00:20:48.783 Error Count: 0x3 00:20:48.783 Submission Queue Id: 0x0 00:20:48.783 Command Id: 0x5 00:20:48.783 Phase Bit: 0 00:20:48.783 Status Code: 0x2 00:20:48.783 Status Code Type: 0x0 00:20:48.783 Do Not Retry: 1 00:20:48.783 Error 
Location: 0x28 00:20:48.783 LBA: 0x0 00:20:48.783 Namespace: 0x0 00:20:48.783 Vendor Log Page: 0x0 00:20:48.783 ----------- 00:20:48.783 Entry: 1 00:20:48.783 Error Count: 0x2 00:20:48.783 Submission Queue Id: 0x0 00:20:48.783 Command Id: 0x5 00:20:48.783 Phase Bit: 0 00:20:48.783 Status Code: 0x2 00:20:48.783 Status Code Type: 0x0 00:20:48.783 Do Not Retry: 1 00:20:48.783 Error Location: 0x28 00:20:48.783 LBA: 0x0 00:20:48.783 Namespace: 0x0 00:20:48.783 Vendor Log Page: 0x0 00:20:48.783 ----------- 00:20:48.783 Entry: 2 00:20:48.783 Error Count: 0x1 00:20:48.783 Submission Queue Id: 0x0 00:20:48.783 Command Id: 0x4 00:20:48.783 Phase Bit: 0 00:20:48.783 Status Code: 0x2 00:20:48.783 Status Code Type: 0x0 00:20:48.783 Do Not Retry: 1 00:20:48.783 Error Location: 0x28 00:20:48.783 LBA: 0x0 00:20:48.783 Namespace: 0x0 00:20:48.783 Vendor Log Page: 0x0 00:20:48.783 00:20:48.783 Number of Queues 00:20:48.783 ================ 00:20:48.783 Number of I/O Submission Queues: 128 00:20:48.783 Number of I/O Completion Queues: 128 00:20:48.783 00:20:48.783 ZNS Specific Controller Data 00:20:48.783 ============================ 00:20:48.783 Zone Append Size Limit: 0 00:20:48.783 00:20:48.783 00:20:48.783 Active Namespaces 00:20:48.783 ================= 00:20:48.783 get_feature(0x05) failed 00:20:48.783 Namespace ID:1 00:20:48.783 Command Set Identifier: NVM (00h) 00:20:48.783 Deallocate: Supported 00:20:48.783 Deallocated/Unwritten Error: Not Supported 00:20:48.783 Deallocated Read Value: Unknown 00:20:48.783 Deallocate in Write Zeroes: Not Supported 00:20:48.783 Deallocated Guard Field: 0xFFFF 00:20:48.783 Flush: Supported 00:20:48.783 Reservation: Not Supported 00:20:48.783 Namespace Sharing Capabilities: Multiple Controllers 00:20:48.783 Size (in LBAs): 1310720 (5GiB) 00:20:48.783 Capacity (in LBAs): 1310720 (5GiB) 00:20:48.783 Utilization (in LBAs): 1310720 (5GiB) 00:20:48.783 UUID: 7fa0fc78-9068-4ad8-8853-9bad77e7e363 00:20:48.783 Thin Provisioning: Not Supported 00:20:48.783 Per-NS Atomic Units: Yes 00:20:48.783 Atomic Boundary Size (Normal): 0 00:20:48.783 Atomic Boundary Size (PFail): 0 00:20:48.783 Atomic Boundary Offset: 0 00:20:48.783 NGUID/EUI64 Never Reused: No 00:20:48.783 ANA group ID: 1 00:20:48.783 Namespace Write Protected: No 00:20:48.783 Number of LBA Formats: 1 00:20:48.783 Current LBA Format: LBA Format #00 00:20:48.783 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:48.783 00:20:48.783 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:48.783 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:48.783 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:48.783 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:48.784 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:48.784 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:48.784 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:48.784 rmmod nvme_tcp 00:20:49.044 rmmod nvme_fabrics 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:49.044 16:56:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:49.044 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:20:49.304 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:49.304 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:49.304 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:49.304 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:49.304 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:49.304 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:49.304 16:56:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:49.872 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:49.872 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:50.132 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:50.132 ************************************ 00:20:50.132 END TEST nvmf_identify_kernel_target 00:20:50.132 ************************************ 00:20:50.132 00:20:50.132 real 0m3.248s 00:20:50.132 user 0m1.167s 00:20:50.132 sys 0m1.493s 00:20:50.132 16:56:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.132 16:56:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.132 16:56:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:50.132 16:56:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:50.132 16:56:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.132 16:56:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.132 ************************************ 00:20:50.132 START TEST nvmf_auth_host 00:20:50.132 ************************************ 00:20:50.132 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:50.132 * Looking for test storage... 
00:20:50.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:50.132 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:50.132 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:50.132 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.393 16:56:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:50.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.393 --rc genhtml_branch_coverage=1 00:20:50.393 --rc genhtml_function_coverage=1 00:20:50.393 --rc genhtml_legend=1 00:20:50.393 --rc geninfo_all_blocks=1 00:20:50.393 --rc geninfo_unexecuted_blocks=1 00:20:50.393 00:20:50.393 ' 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:50.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.393 --rc genhtml_branch_coverage=1 00:20:50.393 --rc genhtml_function_coverage=1 00:20:50.393 --rc genhtml_legend=1 00:20:50.393 --rc geninfo_all_blocks=1 00:20:50.393 --rc geninfo_unexecuted_blocks=1 00:20:50.393 00:20:50.393 ' 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:50.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.393 --rc genhtml_branch_coverage=1 00:20:50.393 --rc genhtml_function_coverage=1 00:20:50.393 --rc genhtml_legend=1 00:20:50.393 --rc geninfo_all_blocks=1 00:20:50.393 --rc geninfo_unexecuted_blocks=1 00:20:50.393 00:20:50.393 ' 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:50.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.393 --rc genhtml_branch_coverage=1 00:20:50.393 --rc genhtml_function_coverage=1 00:20:50.393 --rc genhtml_legend=1 00:20:50.393 --rc geninfo_all_blocks=1 00:20:50.393 --rc geninfo_unexecuted_blocks=1 00:20:50.393 00:20:50.393 ' 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.393 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:50.394 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:50.394 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:50.395 Cannot find device "nvmf_init_br" 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:50.395 Cannot find device "nvmf_init_br2" 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:50.395 Cannot find device "nvmf_tgt_br" 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.395 Cannot find device "nvmf_tgt_br2" 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:50.395 Cannot find device "nvmf_init_br" 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:50.395 Cannot find device "nvmf_init_br2" 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:50.395 Cannot find device "nvmf_tgt_br" 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:50.395 Cannot find device "nvmf_tgt_br2" 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:50.395 Cannot find device "nvmf_br" 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:50.395 Cannot find device "nvmf_init_if" 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:50.395 Cannot find device "nvmf_init_if2" 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:50.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.395 16:56:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:50.395 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:50.655 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:50.655 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:20:50.655 00:20:50.655 --- 10.0.0.3 ping statistics --- 00:20:50.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.655 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:50.655 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:50.655 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:20:50.655 00:20:50.655 --- 10.0.0.4 ping statistics --- 00:20:50.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.655 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:50.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:50.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:50.655 00:20:50.655 --- 10.0.0.1 ping statistics --- 00:20:50.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.655 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:50.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:50.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:20:50.655 00:20:50.655 --- 10.0.0.2 ping statistics --- 00:20:50.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.655 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=94908 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 94908 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 94908 ']' 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.655 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8bead348307c5d1738211f14d01a6da1 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5uB 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8bead348307c5d1738211f14d01a6da1 0 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8bead348307c5d1738211f14d01a6da1 0 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8bead348307c5d1738211f14d01a6da1 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5uB 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5uB 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5uB 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:51.224 16:56:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6fb475c4c470c1e7a70c62fea52067ba40ab4592b1fd4cd3b7f9a8bd66cd98e8 00:20:51.224 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.LIg 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6fb475c4c470c1e7a70c62fea52067ba40ab4592b1fd4cd3b7f9a8bd66cd98e8 3 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6fb475c4c470c1e7a70c62fea52067ba40ab4592b1fd4cd3b7f9a8bd66cd98e8 3 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6fb475c4c470c1e7a70c62fea52067ba40ab4592b1fd4cd3b7f9a8bd66cd98e8 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.LIg 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.LIg 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.LIg 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ca2ded7acf62586d78b72a3d7c4cd4304751b804caf1d2de 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ZO9 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ca2ded7acf62586d78b72a3d7c4cd4304751b804caf1d2de 0 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ca2ded7acf62586d78b72a3d7c4cd4304751b804caf1d2de 0 
00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ca2ded7acf62586d78b72a3d7c4cd4304751b804caf1d2de 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:51.225 16:56:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:51.225 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ZO9 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ZO9 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ZO9 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e2b8110ed9ff13753d67deb3352f55097c30c7d8b590e979 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.wfm 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e2b8110ed9ff13753d67deb3352f55097c30c7d8b590e979 2 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e2b8110ed9ff13753d67deb3352f55097c30c7d8b590e979 2 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e2b8110ed9ff13753d67deb3352f55097c30c7d8b590e979 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.wfm 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.wfm 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.wfm 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:51.489 16:56:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=be7bee4f9594584b28d3d942e21773e3 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.lLM 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key be7bee4f9594584b28d3d942e21773e3 1 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 be7bee4f9594584b28d3d942e21773e3 1 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=be7bee4f9594584b28d3d942e21773e3 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.lLM 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.lLM 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.lLM 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4d1e849796f0810fb1395b30a09f0fa7 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9tE 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4d1e849796f0810fb1395b30a09f0fa7 1 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4d1e849796f0810fb1395b30a09f0fa7 1 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=4d1e849796f0810fb1395b30a09f0fa7 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9tE 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9tE 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9tE 00:20:51.489 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d32f7016c99585dc599474f7168f17100a33ddecc3e47267 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.bn3 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d32f7016c99585dc599474f7168f17100a33ddecc3e47267 2 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d32f7016c99585dc599474f7168f17100a33ddecc3e47267 2 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d32f7016c99585dc599474f7168f17100a33ddecc3e47267 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.bn3 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.bn3 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.bn3 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:51.490 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:51.490 16:56:15 
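The gen_dhchap_key calls traced here all follow the same pattern: read N random bytes with xxd as a hex string, then pipe that string through a short python snippet (shown only as "python -" in the trace) to wrap it in the DHHC-1 secret format before chmod 0600 and storage in keys[]/ckeys[]. Below is a minimal sketch of that formatting step, assuming the conventional DH-HMAC-CHAP secret layout of base64(secret concatenated with a little-endian CRC-32 of the secret); the helper name make_dhchap_key is ours and is not part of nvmf/common.sh.

    # Sketch only: build a "DHHC-1:<digest>:<base64>:" secret from random bytes.
    # The real logic lives in nvmf/common.sh (gen_dhchap_key / format_dhchap_key),
    # whose python body is elided in the xtrace output above.
    make_dhchap_key() {
        local digest=$1 nbytes=$2 key
        # Random secret, kept as an ASCII hex string exactly as in the trace.
        key=$(xxd -p -c0 -l "$nbytes" /dev/urandom)
        # Append CRC-32 of the secret (assumed little-endian) and base64 the result.
        python3 -c 'import base64, sys, zlib
    key = sys.argv[2].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")
    print("DHHC-1:%02x:%s:" % (int(sys.argv[1]), base64.b64encode(key + crc).decode()))' "$digest" "$key"
    }

    # Usage: a 24-byte (48 hex character) secret tagged for hmac(sha384), digest id 2,
    # comparable to the /tmp/spdk.key-sha384.* files created above.
    make_dhchap_key 2 24 > /tmp/example.key && chmod 0600 /tmp/example.key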
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=93a8de4e7015ff0228269f422fa72317 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3xI 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 93a8de4e7015ff0228269f422fa72317 0 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 93a8de4e7015ff0228269f422fa72317 0 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=93a8de4e7015ff0228269f422fa72317 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3xI 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3xI 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.3xI 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8a399af0c98c4163a5ca2ae154478b0ac435c79dd4aace1b12e98363e7e1371f 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.YIN 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8a399af0c98c4163a5ca2ae154478b0ac435c79dd4aace1b12e98363e7e1371f 3 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8a399af0c98c4163a5ca2ae154478b0ac435c79dd4aace1b12e98363e7e1371f 3 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8a399af0c98c4163a5ca2ae154478b0ac435c79dd4aace1b12e98363e7e1371f 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.YIN 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.YIN 00:20:51.748 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.YIN 00:20:51.749 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:51.749 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 94908 00:20:51.749 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 94908 ']' 00:20:51.749 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.749 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.749 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.749 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.749 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5uB 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.LIg ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LIg 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ZO9 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.wfm ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.wfm 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.lLM 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9tE ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9tE 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.bn3 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3xI ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3xI 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.YIN 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:52.008 16:56:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:52.008 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:52.267 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:52.267 16:56:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:52.526 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:52.526 Waiting for block devices as requested 00:20:52.526 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:52.785 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:53.353 No valid GPT data, bailing 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:53.353 No valid GPT data, bailing 00:20:53.353 16:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:53.353 No valid GPT data, bailing 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:53.353 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:53.612 No valid GPT data, bailing 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:20:53.612 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -a 10.0.0.1 -t tcp -s 4420 00:20:53.613 00:20:53.613 Discovery Log Number of Records 2, Generation counter 2 00:20:53.613 =====Discovery Log Entry 0====== 00:20:53.613 trtype: tcp 00:20:53.613 adrfam: ipv4 00:20:53.613 subtype: current discovery subsystem 00:20:53.613 treq: not specified, sq flow control disable supported 00:20:53.613 portid: 1 00:20:53.613 trsvcid: 4420 00:20:53.613 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:53.613 traddr: 10.0.0.1 00:20:53.613 eflags: none 00:20:53.613 sectype: none 00:20:53.613 =====Discovery Log Entry 1====== 00:20:53.613 trtype: tcp 00:20:53.613 adrfam: ipv4 00:20:53.613 subtype: nvme subsystem 00:20:53.613 treq: not specified, sq flow control disable supported 00:20:53.613 portid: 1 00:20:53.613 trsvcid: 4420 00:20:53.613 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:53.613 traddr: 10.0.0.1 00:20:53.613 eflags: none 00:20:53.613 sectype: none 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.613 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.872 nvme0n1 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:53.872 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.873 nvme0n1 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.873 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.133 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.133 
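The first connect_authenticate pass (sha256, ffdhe2048, keyid 0) traced above shows the host-side RPC sequence that is repeated for every digest/dhgroup/key combination: load the two secrets into the keyring, restrict the initiator to the digest and DH group under test, attach with the per-key credentials, verify the controller came up, then detach. Condensed into direct rpc.py calls it looks roughly like the sketch below; the trace actually goes through the rpc_cmd wrapper and /var/tmp/spdk.sock, so the explicit script path here is an assumption, while the RPC names, flags and key file names are taken from this run.

    rpc="scripts/rpc.py"   # assumed direct invocation; the log uses the rpc_cmd wrapper

    # Register the host secret and the controller (bidirectional) secret.
    $rpc keyring_file_add_key key0  /tmp/spdk.key-null.5uB
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LIg

    # Restrict the initiator to the digest/DH group being exercised.
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Attach to the kernel target; DH-HMAC-CHAP runs during the CONNECT exchange.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Confirm the controller exists, then tear it down before the next combination.
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
    $rpc bdev_nvme_detach_controller nvme0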
16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.133 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.133 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.133 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.133 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.133 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.133 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:54.133 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.133 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.133 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:54.133 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:54.133 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:20:54.133 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.134 16:56:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.134 nvme0n1 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:20:54.134 16:56:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.134 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.135 16:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.394 nvme0n1 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.394 16:56:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.394 nvme0n1 00:20:54.394 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:54.654 
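On the target side of each pass, nvmet_auth_set_key (host/auth.sh@42) reprograms the kernel nvmet host entry created earlier at host/auth.sh@36: the trace shows it echoing 'hmac(sha256)', the FFDHE group and the DHHC-1 strings, which go into the per-host auth attributes under configfs. A rough sketch of those writes for the keyid 4 pass being set up here is below; xtrace does not show redirection targets, so the attribute file names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the standard nvmet ones and are assumed rather than read from this log.

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    # Digest and DH group for this pass (the values echoed in the trace above).
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo 'ffdhe2048'    > "$host/dhchap_dhgroup"

    # Host secret for keyid 4; the controller secret stays unset because
    # ckeys[4] was generated empty above (unidirectional authentication).
    cat /tmp/spdk.key-sha512.YIN > "$host/dhchap_key"
    # cat <controller secret>    > "$host/dhchap_ctrl_key"   # used for keyids 0-3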
16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
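Keyid 4 is the one entry generated without a companion controller key (ckeys[4]= is empty above), so the attach that just completed exercises unidirectional authentication: the target verifies the host, but the host does not challenge the controller. On the initiator the only difference is whether --dhchap-ctrlr-key is passed, e.g. (same assumed rpc.py invocation as in the earlier sketch):

    rpc="scripts/rpc.py"   # assumed direct invocation; the log uses rpc_cmd

    # Unidirectional (keyid 4): only the host secret is supplied.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4

    # Bidirectional (keyids 0-3): the controller must also prove knowledge of ckeyN.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3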
00:20:54.654 nvme0n1 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.654 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:55.225 16:56:18 
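Each iteration begins with nvmet_auth_set_key, visible above as a series of echo commands: the HMAC name ('hmac(sha256)'), the DH group, the host DHHC-1 secret and, when one is defined, the controller secret. The redirection targets are not captured by the trace; the sketch below assumes the Linux nvmet soft-target configfs layout, so the paths (and the host directory name) are assumptions rather than something taken from the log. The echoed values themselves are copied from the trace for keyid 0 / ffdhe3072.

    # Assumed target-side effect of nvmet_auth_set_key (Linux nvmet configfs).
    hostnqn="nqn.2024-02.io.spdk:host0"
    host_cfg="/sys/kernel/config/nvmet/hosts/$hostnqn"   # path is an assumption

    echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"        # digest for DH-HMAC-CHAP
    echo 'ffdhe3072'    > "$host_cfg/dhchap_dhgroup"     # FFDHE group under test
    echo 'DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA:' \
      > "$host_cfg/dhchap_key"                           # host secret (keyid 0)

    # The controller secret is only written when the test defines one.
    ckey='DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=:'
    [[ -n "$ckey" ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"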
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.225 nvme0n1 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.225 16:56:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.225 16:56:18 
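Throughout the trace, every rpc_cmd invocation is bracketed by xtrace_disable and a trailing 'set +x' / '[[ 0 == 0 ]]' pair from autotest_common.sh, which keeps the RPC's JSON traffic out of the trace while still checking its exit status. The wrapper below is a simplified stand-in for that pattern, not SPDK's actual implementation.

    # Silence xtrace around a command and preserve its exit status.
    quiet_rpc() {
      local xtrace_was_on=0 status
      [[ $- == *x* ]] && xtrace_was_on=1
      set +x                          # counterpart of xtrace_disable in the trace
      "$@"                            # e.g. the underlying rpc.py call
      status=$?
      ((xtrace_was_on)) && set -x     # counterpart of xtrace_restore
      return "$status"                # checked by the caller ('[[ 0 == 0 ]]')
    }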
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.225 16:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.485 nvme0n1 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:55.485 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:55.486 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.486 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.486 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.486 nvme0n1 00:20:55.486 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.486 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.486 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.486 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.486 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.486 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.745 nvme0n1 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.745 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:55.746 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:56.005 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.006 nvme0n1 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:56.006 16:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.575 16:56:20 
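On the initiator side, connect_authenticate boils down to the two RPCs traced above: bdev_nvme_set_options narrows the allowed digests and DH groups to the pair under test, and bdev_nvme_attach_controller then connects with the key pair for the current key id. Spelled out as direct rpc.py calls they would look roughly like the following; the RPC names, flags, NQNs and the 10.0.0.1:4420 listener are taken verbatim from the trace, while the rpc.py path and socket are assumptions.

    rpc=(./scripts/rpc.py -s /var/tmp/host.sock)   # invocation is an assumption

    # 1. Restrict the initiator to the digest / DH group pair under test.
    "${rpc[@]}" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # 2. Attach to the authenticated subsystem with this key id's secrets.
    "${rpc[@]}" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0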
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.575 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.835 nvme0n1 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.835 16:56:20 
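The address passed to every attach call comes from get_main_ns_ip, whose trace is the ip_candidates block repeated above: it maps the transport in use to the name of the environment variable holding the initiator-facing IP and expands that name indirectly. A sketch of the lookup, with the variable names mirroring the trace and the surrounding defaults purely illustrative:

    get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to consult
      ip=${!ip}                              # indirect expansion -> the address
      [[ -n $ip ]] || return 1
      echo "$ip"
    }

    # With TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 this prints
    # 10.0.0.1, the address used by every attach call in the trace.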
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.835 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.095 nvme0n1 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.095 16:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.354 nvme0n1 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.354 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.355 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:57.355 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.355 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:57.355 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:57.355 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:57.355 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:57.355 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.355 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.614 nvme0n1 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:57.614 16:56:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.614 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.872 nvme0n1 00:20:57.872 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.872 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.872 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.872 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.872 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.872 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.872 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.872 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.872 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.872 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.872 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.873 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.873 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.873 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:57.873 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.873 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:57.873 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:57.873 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:57.873 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:20:57.873 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:20:57.873 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:57.873 16:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:59.351 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.610 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.610 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.870 nvme0n1 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.870 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.130 nvme0n1 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.130 16:56:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.130 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.389 16:56:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.389 16:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.648 nvme0n1 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:00.648 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.648 
16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.908 nvme0n1 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:00.908 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.167 16:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.426 nvme0n1 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.426 16:56:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:01.426 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.427 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.994 nvme0n1 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:01.994 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:01.995 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:01.995 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.995 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.995 16:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.562 nvme0n1 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:02.562 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.563 
16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.563 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.130 nvme0n1 00:21:03.130 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.130 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.130 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.130 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.130 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.130 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.389 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.390 16:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.984 nvme0n1 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.984 16:56:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:03.984 16:56:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.984 16:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.553 nvme0n1 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.553 nvme0n1 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.553 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:04.813 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.814 nvme0n1 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:21:04.814 
16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.814 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.073 nvme0n1 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.074 
16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.074 nvme0n1 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.074 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.335 16:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.335 nvme0n1 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:05.335 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.336 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.596 nvme0n1 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.596 
16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.596 16:56:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.596 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.855 nvme0n1 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:05.856 16:56:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.856 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.115 nvme0n1 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.115 16:56:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.115 nvme0n1 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.115 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.375 
16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.375 16:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
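The trace above repeats one pattern per key index: host/auth.sh provisions the key on the target side (nvmet_auth_set_key), then connect_authenticate limits the initiator to the digest/DH group under test (bdev_nvme_set_options), attaches a controller with the matching --dhchap-key/--dhchap-ctrlr-key pair, confirms it shows up in bdev_nvme_get_controllers, and detaches it before the next iteration. Below is a minimal sketch of that loop reconstructed from the visible xtrace; rpc_cmd, SPDK_DIR, the stubbed nvmet_auth_set_key, and the empty keys/ckeys arrays are illustrative assumptions rather than the script's actual definitions.

#!/usr/bin/env bash
# Hedged sketch, not part of the original log: a reconstruction of the loop the
# xtrace above is executing. The rpc.py subcommands are real SPDK RPCs; rpc_cmd,
# SPDK_DIR, and the stubbed target-side helper are assumptions for illustration.

rpc_cmd() { "${SPDK_DIR:?}/scripts/rpc.py" "$@"; }

# keys[n]/ckeys[n] hold the DHHC-1 secrets echoed in the trace; the matching
# keyring entries are named key<n>/ckey<n> (populated elsewhere in the script).
declare -a keys=() ckeys=()

nvmet_auth_set_key() { :; }   # placeholder: in auth.sh this provisions the target

digest=sha384                 # the digest exercised in this part of the log
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Restrict the initiator to the digest/DH group under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach with the matching key pair; the controller key is optional.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # The attach only succeeds if DH-HMAC-CHAP completed, so the controller
        # must be listed before it is torn down for the next iteration.
        [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

The DHHC-1:<nn>:...: strings echoed in the trace are the standard NVMe-oF DH-HMAC-CHAP secret representation, where the two-digit field indicates how the secret was transformed (00 meaning an untransformed secret) and the base64 payload carries the secret plus a checksum; the attach RPC itself takes keyring entry names (key0, ckey0, ...), not the raw secrets.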
00:21:06.375 nvme0n1 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:06.375 16:56:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.375 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.376 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.376 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.376 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.376 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.376 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.376 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.376 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.376 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.376 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.376 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.376 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.376 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.635 nvme0n1 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.635 16:56:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.635 16:56:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.635 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.895 nvme0n1 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:06.895 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.896 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.155 nvme0n1 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.155 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.156 16:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.415 nvme0n1 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.415 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.416 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.416 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.416 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.416 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.416 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.416 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.416 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:07.416 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.416 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.676 nvme0n1 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.676 16:56:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.676 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.935 nvme0n1 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.195 16:56:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.195 16:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.455 nvme0n1 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.455 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.715 nvme0n1 00:21:08.715 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.715 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.715 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.715 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.715 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.974 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.234 nvme0n1 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:09.234 16:56:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.234 16:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.493 nvme0n1 00:21:09.493 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.493 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.493 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.493 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.493 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.493 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.752 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.320 nvme0n1 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:10.320 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.321 16:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.888 nvme0n1 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.888 16:56:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.888 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.889 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.889 16:56:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.889 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.889 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.889 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.889 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.889 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.889 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.889 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.889 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.889 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.889 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.889 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.889 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.889 16:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.454 nvme0n1 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:11.454 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.454 
16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.020 nvme0n1 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.020 16:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.588 nvme0n1 00:21:12.588 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.588 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.588 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.588 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.588 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.588 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.588 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.588 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.588 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.588 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:12.847 16:56:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:12.847 16:56:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.847 nvme0n1 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:12.847 16:56:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.847 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.848 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.106 nvme0n1 00:21:13.106 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.106 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.106 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.106 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.106 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.106 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.106 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.106 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.106 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.106 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.106 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.107 nvme0n1 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.107 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.366 16:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.366 nvme0n1 00:21:13.366 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.367 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.626 nvme0n1 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.626 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:13.885 nvme0n1 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.885 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.143 nvme0n1 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:14.143 
16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.143 nvme0n1 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.143 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.401 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.401 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.401 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.401 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.401 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.401 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.401 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:14.401 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.401 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.401 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:14.401 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:14.401 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.402 
16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.402 16:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.402 nvme0n1 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.402 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.661 nvme0n1 00:21:14.661 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.661 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.661 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.661 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.661 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.661 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.661 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.662 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.920 nvme0n1 00:21:14.920 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.920 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.920 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.920 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.920 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.920 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.920 
16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.920 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.920 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.920 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.921 16:56:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.921 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.179 nvme0n1 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:15.179 16:56:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.179 16:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.438 nvme0n1 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.438 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.439 16:56:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.439 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.698 nvme0n1 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:15.698 
16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.698 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:21:15.957 nvme0n1 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:15.957 16:56:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:15.957 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.958 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.216 nvme0n1 00:21:16.216 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.216 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.216 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.216 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.216 16:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:16.475 16:56:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.475 16:56:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.475 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.476 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.735 nvme0n1 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.735 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.302 nvme0n1 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.302 16:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.560 nvme0n1 00:21:17.560 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.560 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.560 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.560 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.560 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.560 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.560 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.560 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.561 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.819 nvme0n1 00:21:17.819 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.819 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.819 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.819 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.819 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.819 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.078 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.078 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.078 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.078 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.078 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.078 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.078 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.078 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJlYWQzNDgzMDdjNWQxNzM4MjExZjE0ZDAxYTZkYTF6hLQA: 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: ]] 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmZiNDc1YzRjNDcwYzFlN2E3MGM2MmZlYTUyMDY3YmE0MGFiNDU5MmIxZmQ0Y2QzYjdmOWE4YmQ2NmNkOThlOE5SLUo=: 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.079 16:56:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.079 16:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.646 nvme0n1 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:18.646 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.647 16:56:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.647 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.215 nvme0n1 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.215 16:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.782 nvme0n1 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyZjcwMTZjOTk1ODVkYzU5OTQ3NGY3MTY4ZjE3MTAwYTMzZGRlY2MzZTQ3MjY34j54Bg==: 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: ]] 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhOGRlNGU3MDE1ZmYwMjI4MjY5ZjQyMmZhNzIzMTc+w6vZ: 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.782 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.350 nvme0n1 00:21:20.350 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.350 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:20.350 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.350 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:20.350 16:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGEzOTlhZjBjOThjNDE2M2E1Y2EyYWUxNTQ0NzhiMGFjNDM1Yzc5ZGQ0YWFjZTFiMTJlOTgzNjNlN2UxMzcxZl8mMSQ=: 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:20.350 16:56:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.350 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.917 nvme0n1 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.917 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.176 request: 00:21:21.176 { 00:21:21.176 "name": "nvme0", 00:21:21.176 "trtype": "tcp", 00:21:21.176 "traddr": "10.0.0.1", 00:21:21.176 "adrfam": "ipv4", 00:21:21.176 "trsvcid": "4420", 00:21:21.176 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:21.176 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:21.176 "prchk_reftag": false, 00:21:21.176 "prchk_guard": false, 00:21:21.176 "hdgst": false, 00:21:21.176 "ddgst": false, 00:21:21.176 "allow_unrecognized_csi": false, 00:21:21.176 "method": "bdev_nvme_attach_controller", 00:21:21.176 "req_id": 1 00:21:21.176 } 00:21:21.176 Got JSON-RPC error response 00:21:21.176 response: 00:21:21.176 { 00:21:21.176 "code": -5, 00:21:21.176 "message": "Input/output error" 00:21:21.176 } 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.176 request: 00:21:21.176 { 00:21:21.176 "name": "nvme0", 00:21:21.176 "trtype": "tcp", 00:21:21.176 "traddr": "10.0.0.1", 00:21:21.176 "adrfam": "ipv4", 00:21:21.176 "trsvcid": "4420", 00:21:21.176 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:21.176 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:21.176 "prchk_reftag": false, 00:21:21.176 "prchk_guard": false, 00:21:21.176 "hdgst": false, 00:21:21.176 "ddgst": false, 00:21:21.176 "dhchap_key": "key2", 00:21:21.176 "allow_unrecognized_csi": false, 00:21:21.176 "method": "bdev_nvme_attach_controller", 00:21:21.176 "req_id": 1 00:21:21.176 } 00:21:21.176 Got JSON-RPC error response 00:21:21.176 response: 00:21:21.176 { 00:21:21.176 "code": -5, 00:21:21.176 "message": "Input/output error" 00:21:21.176 } 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:21.176 16:56:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.176 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.177 request: 00:21:21.177 { 00:21:21.177 "name": "nvme0", 00:21:21.177 "trtype": "tcp", 00:21:21.177 "traddr": "10.0.0.1", 00:21:21.177 "adrfam": "ipv4", 00:21:21.177 "trsvcid": "4420", 
00:21:21.177 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:21.177 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:21.177 "prchk_reftag": false, 00:21:21.177 "prchk_guard": false, 00:21:21.177 "hdgst": false, 00:21:21.177 "ddgst": false, 00:21:21.177 "dhchap_key": "key1", 00:21:21.177 "dhchap_ctrlr_key": "ckey2", 00:21:21.177 "allow_unrecognized_csi": false, 00:21:21.177 "method": "bdev_nvme_attach_controller", 00:21:21.177 "req_id": 1 00:21:21.177 } 00:21:21.177 Got JSON-RPC error response 00:21:21.177 response: 00:21:21.177 { 00:21:21.177 "code": -5, 00:21:21.177 "message": "Input/output error" 00:21:21.177 } 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.177 16:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.435 nvme0n1 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.435 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.436 request: 00:21:21.436 { 00:21:21.436 "name": "nvme0", 00:21:21.436 "dhchap_key": "key1", 00:21:21.436 "dhchap_ctrlr_key": "ckey2", 00:21:21.436 "method": "bdev_nvme_set_keys", 00:21:21.436 "req_id": 1 00:21:21.436 } 00:21:21.436 Got JSON-RPC error response 00:21:21.436 response: 00:21:21.436 
{ 00:21:21.436 "code": -13, 00:21:21.436 "message": "Permission denied" 00:21:21.436 } 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:21.436 16:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2EyZGVkN2FjZjYyNTg2ZDc4YjcyYTNkN2M0Y2Q0MzA0NzUxYjgwNGNhZjFkMmRlUIqU4Q==: 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: ]] 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTJiODExMGVkOWZmMTM3NTNkNjdkZWIzMzUyZjU1MDk3YzMwYzdkOGI1OTBlOTc51pCZmw==: 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:22.809 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.810 nvme0n1 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU3YmVlNGY5NTk0NTg0YjI4ZDNkOTQyZTIxNzczZTObHEne: 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: ]] 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQxZTg0OTc5NmYwODEwZmIxMzk1YjMwYTA5ZjBmYTeGBl7p: 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.810 request: 00:21:22.810 { 00:21:22.810 "name": "nvme0", 00:21:22.810 "dhchap_key": "key2", 00:21:22.810 "dhchap_ctrlr_key": "ckey1", 00:21:22.810 "method": "bdev_nvme_set_keys", 00:21:22.810 "req_id": 1 00:21:22.810 } 00:21:22.810 Got JSON-RPC error response 00:21:22.810 response: 00:21:22.810 { 00:21:22.810 "code": -13, 00:21:22.810 "message": "Permission denied" 00:21:22.810 } 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:22.810 16:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:23.746 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:23.746 rmmod nvme_tcp 00:21:24.006 rmmod nvme_fabrics 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 94908 ']' 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 94908 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 94908 ']' 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 94908 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94908 00:21:24.006 killing process with pid 94908 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94908' 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 94908 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 94908 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:24.006 16:56:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:24.006 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:24.265 16:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:24.876 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:25.157 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
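Condensed from the trace above, the kernel NVMe target (configfs) teardown performed by clean_kernel_target amounts to the sequence below. The NQN paths are the ones this test uses; only the redirect target of the logged 'echo 0' (the namespace enable attribute) is an assumption, since the trace does not show it.

# Kernel nvmet config-fs teardown, condensed from the trace above.
rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable  # assumed target of the logged 'echo 0'
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe -r nvmet_tcp nvmet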
00:21:25.157 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:25.157 16:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5uB /tmp/spdk.key-null.ZO9 /tmp/spdk.key-sha256.lLM /tmp/spdk.key-sha384.bn3 /tmp/spdk.key-sha512.YIN /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:25.157 16:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:25.732 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:25.732 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:25.732 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:25.732 00:21:25.732 real 0m35.502s 00:21:25.732 user 0m32.397s 00:21:25.732 sys 0m3.754s 00:21:25.732 16:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.732 16:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.732 ************************************ 00:21:25.732 END TEST nvmf_auth_host 00:21:25.732 ************************************ 00:21:25.732 16:56:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:21:25.732 16:56:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:25.732 16:56:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:25.732 16:56:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.732 16:56:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.732 ************************************ 00:21:25.732 START TEST nvmf_digest 00:21:25.732 ************************************ 00:21:25.732 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:25.732 * Looking for test storage... 
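For reference, the final negative check of the auth suite, captured at the start of this excerpt, reduces to the sketch below (rpc_cmd is the suite's wrapper around scripts/rpc.py): rotating to a mismatched controller key is expected to fail with JSON-RPC error -13 (Permission denied), after which the controller count is polled until the stale controller is gone.

# Mismatched-key rotation check; expected result is the -13 Permission denied error seen above.
rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
# Then poll (with 'sleep 1s' between attempts) until no controllers remain before cleanup:
rpc_cmd bdev_nvme_get_controllers | jq length   # repeat until it prints 0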
00:21:25.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:25.732 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:25.732 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:21:25.732 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:25.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.993 --rc genhtml_branch_coverage=1 00:21:25.993 --rc genhtml_function_coverage=1 00:21:25.993 --rc genhtml_legend=1 00:21:25.993 --rc geninfo_all_blocks=1 00:21:25.993 --rc geninfo_unexecuted_blocks=1 00:21:25.993 00:21:25.993 ' 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:25.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.993 --rc genhtml_branch_coverage=1 00:21:25.993 --rc genhtml_function_coverage=1 00:21:25.993 --rc genhtml_legend=1 00:21:25.993 --rc geninfo_all_blocks=1 00:21:25.993 --rc geninfo_unexecuted_blocks=1 00:21:25.993 00:21:25.993 ' 00:21:25.993 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:25.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.994 --rc genhtml_branch_coverage=1 00:21:25.994 --rc genhtml_function_coverage=1 00:21:25.994 --rc genhtml_legend=1 00:21:25.994 --rc geninfo_all_blocks=1 00:21:25.994 --rc geninfo_unexecuted_blocks=1 00:21:25.994 00:21:25.994 ' 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:25.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.994 --rc genhtml_branch_coverage=1 00:21:25.994 --rc genhtml_function_coverage=1 00:21:25.994 --rc genhtml_legend=1 00:21:25.994 --rc geninfo_all_blocks=1 00:21:25.994 --rc geninfo_unexecuted_blocks=1 00:21:25.994 00:21:25.994 ' 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.994 16:56:49 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:25.994 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:25.994 Cannot find device "nvmf_init_br" 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:25.994 Cannot find device "nvmf_init_br2" 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:25.994 Cannot find device "nvmf_tgt_br" 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:21:25.994 Cannot find device "nvmf_tgt_br2" 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:25.994 Cannot find device "nvmf_init_br" 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:25.994 Cannot find device "nvmf_init_br2" 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:25.994 Cannot find device "nvmf_tgt_br" 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:25.994 Cannot find device "nvmf_tgt_br2" 00:21:25.994 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:21:25.995 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:25.995 Cannot find device "nvmf_br" 00:21:25.995 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:21:25.995 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:25.995 Cannot find device "nvmf_init_if" 00:21:25.995 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:21:25.995 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:25.995 Cannot find device "nvmf_init_if2" 00:21:25.995 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:21:25.995 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:25.995 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:25.995 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:21:25.995 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:25.995 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:25.995 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:21:25.995 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:25.995 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:26.253 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:26.254 16:56:49 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:26.254 16:56:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:26.254 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:26.254 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:21:26.254 00:21:26.254 --- 10.0.0.3 ping statistics --- 00:21:26.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.254 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:26.254 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:26.254 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:21:26.254 00:21:26.254 --- 10.0.0.4 ping statistics --- 00:21:26.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.254 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:26.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:21:26.254 00:21:26.254 --- 10.0.0.1 ping statistics --- 00:21:26.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.254 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:26.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:26.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:21:26.254 00:21:26.254 --- 10.0.0.2 ping statistics --- 00:21:26.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.254 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:26.254 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:26.513 ************************************ 00:21:26.513 START TEST nvmf_digest_clean 00:21:26.513 ************************************ 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
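The reachability checks above close out nvmf_veth_init. The topology it builds, condensed from the trace (the second initiator/target pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4, is set up the same way), is:

# Target side lives in its own network namespace, joined to the initiator side by a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # initiator side reaches the target namespace over the bridge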
00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=96539 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 96539 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 96539 ']' 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.513 16:56:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:26.513 [2024-11-29 16:56:50.140787] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:21:26.513 [2024-11-29 16:56:50.140878] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.513 [2024-11-29 16:56:50.268527] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:26.513 [2024-11-29 16:56:50.301167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.772 [2024-11-29 16:56:50.322683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.772 [2024-11-29 16:56:50.322740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.772 [2024-11-29 16:56:50.322753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.772 [2024-11-29 16:56:50.322763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:26.772 [2024-11-29 16:56:50.322772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.772 [2024-11-29 16:56:50.323132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.339 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.339 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:27.339 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:27.340 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:27.340 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:27.598 [2024-11-29 16:56:51.165435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:27.598 null0 00:21:27.598 [2024-11-29 16:56:51.197878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.598 [2024-11-29 16:56:51.221984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96571 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96571 /var/tmp/bperf.sock 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 96571 
']' 00:21:27.598 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:27.599 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.599 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:27.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:27.599 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.599 16:56:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:27.599 [2024-11-29 16:56:51.288292] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:21:27.599 [2024-11-29 16:56:51.288395] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96571 ] 00:21:27.857 [2024-11-29 16:56:51.415034] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:27.857 [2024-11-29 16:56:51.447396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.857 [2024-11-29 16:56:51.472358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.425 16:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.425 16:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:28.425 16:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:28.425 16:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:28.425 16:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:28.684 [2024-11-29 16:56:52.417228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:28.684 16:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:28.684 16:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:29.251 nvme0n1 00:21:29.251 16:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:29.251 16:56:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:29.251 Running I/O for 2 seconds... 
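Each digest measurement in this suite follows the flow the trace just walked through: start bdevperf against its own RPC socket, finish framework init, attach an NVMe/TCP controller to the target in the namespace with data digest enabled (--ddgst), then drive the workload through bdevperf.py. A condensed sketch of one run, using the paths and arguments from this build host (backgrounding of bdevperf is implied; the suite waits on its socket):

# One measurement run (4 KiB randread, qd 128), condensed from the trace.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests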
00:21:31.123 17653.00 IOPS, 68.96 MiB/s [2024-11-29T16:56:54.915Z] 17716.50 IOPS, 69.21 MiB/s 00:21:31.123 Latency(us) 00:21:31.123 [2024-11-29T16:56:54.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.123 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:31.123 nvme0n1 : 2.01 17754.66 69.35 0.00 0.00 7203.97 6702.55 16443.58 00:21:31.123 [2024-11-29T16:56:54.915Z] =================================================================================================================== 00:21:31.123 [2024-11-29T16:56:54.915Z] Total : 17754.66 69.35 0.00 0.00 7203.97 6702.55 16443.58 00:21:31.123 { 00:21:31.123 "results": [ 00:21:31.123 { 00:21:31.123 "job": "nvme0n1", 00:21:31.123 "core_mask": "0x2", 00:21:31.123 "workload": "randread", 00:21:31.123 "status": "finished", 00:21:31.123 "queue_depth": 128, 00:21:31.123 "io_size": 4096, 00:21:31.123 "runtime": 2.010064, 00:21:31.123 "iops": 17754.658558135463, 00:21:31.123 "mibps": 69.35413499271665, 00:21:31.123 "io_failed": 0, 00:21:31.123 "io_timeout": 0, 00:21:31.123 "avg_latency_us": 7203.971779666198, 00:21:31.123 "min_latency_us": 6702.545454545455, 00:21:31.123 "max_latency_us": 16443.578181818182 00:21:31.123 } 00:21:31.123 ], 00:21:31.123 "core_count": 1 00:21:31.123 } 00:21:31.123 16:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:31.123 16:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:31.123 16:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:31.123 16:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:31.123 16:56:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:31.123 | select(.opcode=="crc32c") 00:21:31.123 | "\(.module_name) \(.executed)"' 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96571 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 96571 ']' 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 96571 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96571 00:21:31.690 killing process with pid 96571 00:21:31.690 Received shutdown signal, test time was about 2.000000 seconds 00:21:31.690 00:21:31.690 Latency(us) 00:21:31.690 [2024-11-29T16:56:55.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:31.690 [2024-11-29T16:56:55.482Z] =================================================================================================================== 00:21:31.690 [2024-11-29T16:56:55.482Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96571' 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 96571 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 96571 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96627 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96627 /var/tmp/bperf.sock 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 96627 ']' 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:31.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.690 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:31.690 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:31.690 Zero copy mechanism will not be used. 00:21:31.690 [2024-11-29 16:56:55.408295] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:21:31.690 [2024-11-29 16:56:55.408393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96627 ] 00:21:31.949 [2024-11-29 16:56:55.527732] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:31.949 [2024-11-29 16:56:55.553022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.949 [2024-11-29 16:56:55.572179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.949 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.949 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:31.949 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:31.949 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:31.949 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:32.209 [2024-11-29 16:56:55.880284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:32.209 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:32.209 16:56:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:32.468 nvme0n1 00:21:32.468 16:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:32.468 16:56:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:32.726 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:32.726 Zero copy mechanism will not be used. 00:21:32.726 Running I/O for 2 seconds... 
00:21:34.597 8720.00 IOPS, 1090.00 MiB/s [2024-11-29T16:56:58.389Z] 8728.00 IOPS, 1091.00 MiB/s 00:21:34.597 Latency(us) 00:21:34.597 [2024-11-29T16:56:58.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.597 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:34.597 nvme0n1 : 2.00 8726.61 1090.83 0.00 0.00 1830.61 1638.40 4379.00 00:21:34.597 [2024-11-29T16:56:58.389Z] =================================================================================================================== 00:21:34.597 [2024-11-29T16:56:58.389Z] Total : 8726.61 1090.83 0.00 0.00 1830.61 1638.40 4379.00 00:21:34.597 { 00:21:34.597 "results": [ 00:21:34.597 { 00:21:34.597 "job": "nvme0n1", 00:21:34.597 "core_mask": "0x2", 00:21:34.597 "workload": "randread", 00:21:34.597 "status": "finished", 00:21:34.597 "queue_depth": 16, 00:21:34.597 "io_size": 131072, 00:21:34.597 "runtime": 2.002153, 00:21:34.597 "iops": 8726.605808846776, 00:21:34.597 "mibps": 1090.825726105847, 00:21:34.597 "io_failed": 0, 00:21:34.597 "io_timeout": 0, 00:21:34.597 "avg_latency_us": 1830.6060872460873, 00:21:34.597 "min_latency_us": 1638.4, 00:21:34.597 "max_latency_us": 4378.996363636364 00:21:34.597 } 00:21:34.597 ], 00:21:34.597 "core_count": 1 00:21:34.597 } 00:21:34.597 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:34.597 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:34.597 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:34.597 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:34.597 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:34.597 | select(.opcode=="crc32c") 00:21:34.597 | "\(.module_name) \(.executed)"' 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96627 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 96627 ']' 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 96627 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96627 00:21:35.165 killing process with pid 96627 00:21:35.165 Received shutdown signal, test time was about 2.000000 seconds 00:21:35.165 00:21:35.165 Latency(us) 00:21:35.165 [2024-11-29T16:56:58.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.165 
[2024-11-29T16:56:58.957Z] =================================================================================================================== 00:21:35.165 [2024-11-29T16:56:58.957Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96627' 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 96627 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 96627 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96674 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96674 /var/tmp/bperf.sock 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 96674 ']' 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:35.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.165 16:56:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:35.165 [2024-11-29 16:56:58.856702] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:21:35.165 [2024-11-29 16:56:58.856810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96674 ] 00:21:35.424 [2024-11-29 16:56:58.984832] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:35.424 [2024-11-29 16:56:59.002884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.424 [2024-11-29 16:56:59.021967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.359 16:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.359 16:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:36.359 16:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:36.359 16:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:36.359 16:56:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:36.359 [2024-11-29 16:57:00.050653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:36.359 16:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:36.359 16:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:36.926 nvme0n1 00:21:36.926 16:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:36.926 16:57:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:36.926 Running I/O for 2 seconds... 
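A condensed sketch of the digest_clean verification step that brackets each of these 2-second runs, using only commands visible in the xtrace above and below (RPC socket and repo paths as in this log):

    # after perform_tests finishes, ask the accel framework which module computed the crc32c digests
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # the clean pass expects the software module to report a non-zero executed count
    # (exp_module=software and (( acc_executed > 0 )) in host/digest.sh)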
00:21:38.797 18162.00 IOPS, 70.95 MiB/s [2024-11-29T16:57:02.589Z] 18034.50 IOPS, 70.45 MiB/s 00:21:38.797 Latency(us) 00:21:38.797 [2024-11-29T16:57:02.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.797 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:38.797 nvme0n1 : 2.01 18033.79 70.44 0.00 0.00 7091.62 4379.00 15490.33 00:21:38.797 [2024-11-29T16:57:02.589Z] =================================================================================================================== 00:21:38.797 [2024-11-29T16:57:02.589Z] Total : 18033.79 70.44 0.00 0.00 7091.62 4379.00 15490.33 00:21:38.797 { 00:21:38.797 "results": [ 00:21:38.797 { 00:21:38.797 "job": "nvme0n1", 00:21:38.797 "core_mask": "0x2", 00:21:38.797 "workload": "randwrite", 00:21:38.797 "status": "finished", 00:21:38.797 "queue_depth": 128, 00:21:38.797 "io_size": 4096, 00:21:38.797 "runtime": 2.007176, 00:21:38.797 "iops": 18033.79474445689, 00:21:38.797 "mibps": 70.44451072053472, 00:21:38.797 "io_failed": 0, 00:21:38.797 "io_timeout": 0, 00:21:38.797 "avg_latency_us": 7091.624871774909, 00:21:38.797 "min_latency_us": 4378.996363636364, 00:21:38.797 "max_latency_us": 15490.327272727272 00:21:38.797 } 00:21:38.797 ], 00:21:38.797 "core_count": 1 00:21:38.797 } 00:21:38.797 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:38.797 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:38.797 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:38.797 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:38.797 | select(.opcode=="crc32c") 00:21:38.797 | "\(.module_name) \(.executed)"' 00:21:38.797 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96674 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 96674 ']' 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 96674 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96674 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:21:39.366 killing process with pid 96674 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96674' 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 96674 00:21:39.366 Received shutdown signal, test time was about 2.000000 seconds 00:21:39.366 00:21:39.366 Latency(us) 00:21:39.366 [2024-11-29T16:57:03.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.366 [2024-11-29T16:57:03.158Z] =================================================================================================================== 00:21:39.366 [2024-11-29T16:57:03.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.366 16:57:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 96674 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96730 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96730 /var/tmp/bperf.sock 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 96730 ']' 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.366 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:39.366 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:39.366 Zero copy mechanism will not be used. 00:21:39.366 [2024-11-29 16:57:03.061065] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 
00:21:39.366 [2024-11-29 16:57:03.061159] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96730 ] 00:21:39.625 [2024-11-29 16:57:03.180038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:39.625 [2024-11-29 16:57:03.204641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.625 [2024-11-29 16:57:03.225226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.625 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.625 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:39.625 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:39.625 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:39.625 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:39.884 [2024-11-29 16:57:03.609790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:39.884 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:39.884 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:40.451 nvme0n1 00:21:40.451 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:40.451 16:57:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:40.451 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:40.451 Zero copy mechanism will not be used. 00:21:40.451 Running I/O for 2 seconds... 
00:21:42.323 7188.00 IOPS, 898.50 MiB/s [2024-11-29T16:57:06.115Z] 7218.50 IOPS, 902.31 MiB/s 00:21:42.323 Latency(us) 00:21:42.323 [2024-11-29T16:57:06.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.323 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:42.323 nvme0n1 : 2.00 7216.81 902.10 0.00 0.00 2211.99 1608.61 11856.06 00:21:42.323 [2024-11-29T16:57:06.115Z] =================================================================================================================== 00:21:42.323 [2024-11-29T16:57:06.115Z] Total : 7216.81 902.10 0.00 0.00 2211.99 1608.61 11856.06 00:21:42.323 { 00:21:42.323 "results": [ 00:21:42.323 { 00:21:42.323 "job": "nvme0n1", 00:21:42.323 "core_mask": "0x2", 00:21:42.323 "workload": "randwrite", 00:21:42.323 "status": "finished", 00:21:42.323 "queue_depth": 16, 00:21:42.323 "io_size": 131072, 00:21:42.323 "runtime": 2.003517, 00:21:42.323 "iops": 7216.8092409497895, 00:21:42.323 "mibps": 902.1011551187237, 00:21:42.323 "io_failed": 0, 00:21:42.323 "io_timeout": 0, 00:21:42.323 "avg_latency_us": 2211.9945184188523, 00:21:42.323 "min_latency_us": 1608.610909090909, 00:21:42.323 "max_latency_us": 11856.058181818182 00:21:42.323 } 00:21:42.323 ], 00:21:42.323 "core_count": 1 00:21:42.323 } 00:21:42.581 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:42.581 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:42.581 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:42.581 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:42.581 | select(.opcode=="crc32c") 00:21:42.581 | "\(.module_name) \(.executed)"' 00:21:42.581 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96730 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 96730 ']' 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 96730 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96730 00:21:42.841 killing process with pid 96730 00:21:42.841 Received shutdown signal, test time was about 2.000000 seconds 00:21:42.841 00:21:42.841 Latency(us) 00:21:42.841 [2024-11-29T16:57:06.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:42.841 [2024-11-29T16:57:06.633Z] =================================================================================================================== 00:21:42.841 [2024-11-29T16:57:06.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96730' 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 96730 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 96730 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 96539 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 96539 ']' 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 96539 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96539 00:21:42.841 killing process with pid 96539 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96539' 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 96539 00:21:42.841 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 96539 00:21:43.101 ************************************ 00:21:43.102 END TEST nvmf_digest_clean 00:21:43.102 ************************************ 00:21:43.102 00:21:43.102 real 0m16.606s 00:21:43.102 user 0m32.214s 00:21:43.102 sys 0m4.375s 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:43.102 ************************************ 00:21:43.102 START TEST nvmf_digest_error 00:21:43.102 ************************************ 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:21:43.102 16:57:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=96806 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 96806 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96806 ']' 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.102 16:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:43.102 [2024-11-29 16:57:06.800579] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:21:43.102 [2024-11-29 16:57:06.800678] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.362 [2024-11-29 16:57:06.930829] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:43.362 [2024-11-29 16:57:06.949189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.362 [2024-11-29 16:57:06.966858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.362 [2024-11-29 16:57:06.966917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.362 [2024-11-29 16:57:06.966928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.362 [2024-11-29 16:57:06.966935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.362 [2024-11-29 16:57:06.966942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:43.362 [2024-11-29 16:57:06.967236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:44.300 [2024-11-29 16:57:07.795831] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:44.300 [2024-11-29 16:57:07.830426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:44.300 null0 00:21:44.300 [2024-11-29 16:57:07.862070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.300 [2024-11-29 16:57:07.886172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96838 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96838 /var/tmp/bperf.sock 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:44.300 16:57:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96838 ']' 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:44.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.300 16:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:44.300 [2024-11-29 16:57:07.939306] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:21:44.300 [2024-11-29 16:57:07.939422] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96838 ] 00:21:44.300 [2024-11-29 16:57:08.058077] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:44.300 [2024-11-29 16:57:08.086057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.560 [2024-11-29 16:57:08.111267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.560 [2024-11-29 16:57:08.145618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:44.560 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.560 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:44.560 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:44.560 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:44.819 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:44.819 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.819 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:44.819 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.819 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:44.819 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:45.079 nvme0n1 
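A condensed sketch of what produces the error stream below, using only commands visible in the surrounding xtrace (the plain rpc.py calls address the nvmf target's default RPC socket; bdevperf.py talks to /var/tmp/bperf.sock):

    # the target was started with crc32c assigned to the "error" accel module (accel_assign_opc -o crc32c -m error);
    # injection stayed disabled (-t disable) while bdevperf attached with data digest enabled (--ddgst),
    # and is now switched to corrupt mode with the parameters the test uses
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # run the 2-second randread workload; the target now returns corrupted data digests, so each read
    # completes on the host with "data digest error" and COMMAND TRANSIENT TRANSPORT ERROR (00/22)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests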
00:21:45.079 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:45.079 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.079 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:45.079 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.079 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:45.079 16:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:45.079 Running I/O for 2 seconds... 00:21:45.079 [2024-11-29 16:57:08.833807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.079 [2024-11-29 16:57:08.833885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.079 [2024-11-29 16:57:08.833899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.079 [2024-11-29 16:57:08.848216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.079 [2024-11-29 16:57:08.848267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.079 [2024-11-29 16:57:08.848295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.079 [2024-11-29 16:57:08.862416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.079 [2024-11-29 16:57:08.862467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.079 [2024-11-29 16:57:08.862496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.338 [2024-11-29 16:57:08.878227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:08.878277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:08.878306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:08.892789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:08.892840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:08.892868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:08.907109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:08.907159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:08.907187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:08.921415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:08.921464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:08.921492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:08.935743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:08.935814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:08.935843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:08.950003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:08.950053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:08.950081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:08.965175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:08.965227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:08.965256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:08.980396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:08.980456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:08.980485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:08.994999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:08.995049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:08.995078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:09.009458] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:09.009509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:09.009537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:09.023840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:09.023876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:09.023905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:09.038061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:09.038110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:09.038138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:09.052491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:09.052540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:09.052568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:09.066676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:09.066725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:09.066769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:09.081148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:09.081199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:09.081227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:09.095463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:09.095499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:09.095528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:45.339 [2024-11-29 16:57:09.110176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:09.110226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:09.110253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.339 [2024-11-29 16:57:09.124671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.339 [2024-11-29 16:57:09.124723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.339 [2024-11-29 16:57:09.124752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.140231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.140280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.140308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.154513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.154562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.154589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.168990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.169039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.169066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.183578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.183613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.183641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.197998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.198047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.198075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.212418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.212467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.212495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.226524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.226572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.226600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.240877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.240926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.240953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.255078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.255126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.255153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.269588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.269621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.269634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.285837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.285886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.285898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.302487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.302523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.302535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.318029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.318078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.318090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.333356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.333404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.333416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.348482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.348529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.348541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.363478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.363510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.363521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.599 [2024-11-29 16:57:09.378440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.599 [2024-11-29 16:57:09.378488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.599 [2024-11-29 16:57:09.378500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.394791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.394839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.394851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.409965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.410016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:45.860 [2024-11-29 16:57:09.410028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.425093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.425142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.425153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.440196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.440239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.440250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.456215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.456261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.456290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.474202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.474253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.474281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.490944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.491005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.491035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.506463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.506514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.506542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.521098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.521148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:16481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.521176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.535463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.535498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.535526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.549886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.549935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.549962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.564157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.564222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.564249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.578405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.578454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.578482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.592641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.592675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.592703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.606853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.606904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.606931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.860 [2024-11-29 16:57:09.621317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.860 [2024-11-29 16:57:09.621376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.860 [2024-11-29 16:57:09.621405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.861 [2024-11-29 16:57:09.635956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:45.861 [2024-11-29 16:57:09.635991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.861 [2024-11-29 16:57:09.636019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.120 [2024-11-29 16:57:09.651111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.120 [2024-11-29 16:57:09.651161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.120 [2024-11-29 16:57:09.651190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.120 [2024-11-29 16:57:09.666099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.120 [2024-11-29 16:57:09.666148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.120 [2024-11-29 16:57:09.666176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.120 [2024-11-29 16:57:09.680592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.120 [2024-11-29 16:57:09.680642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.120 [2024-11-29 16:57:09.680670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.120 [2024-11-29 16:57:09.694985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.120 [2024-11-29 16:57:09.695034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.120 [2024-11-29 16:57:09.695062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.120 [2024-11-29 16:57:09.709355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.120 [2024-11-29 16:57:09.709404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.120 [2024-11-29 16:57:09.709432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.120 [2024-11-29 16:57:09.723545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19724b0) 00:21:46.120 [2024-11-29 16:57:09.723580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.120 [2024-11-29 16:57:09.723607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.120 [2024-11-29 16:57:09.737784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.120 [2024-11-29 16:57:09.737834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.120 [2024-11-29 16:57:09.737861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.120 [2024-11-29 16:57:09.752183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.120 [2024-11-29 16:57:09.752231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.120 [2024-11-29 16:57:09.752258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.120 [2024-11-29 16:57:09.772586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.120 [2024-11-29 16:57:09.772634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.120 [2024-11-29 16:57:09.772661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.120 [2024-11-29 16:57:09.786801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.121 [2024-11-29 16:57:09.786850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.121 [2024-11-29 16:57:09.786877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.121 [2024-11-29 16:57:09.801122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.121 [2024-11-29 16:57:09.801172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.121 [2024-11-29 16:57:09.801200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.121 17079.00 IOPS, 66.71 MiB/s [2024-11-29T16:57:09.913Z] [2024-11-29 16:57:09.815613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.121 [2024-11-29 16:57:09.815643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.121 [2024-11-29 16:57:09.815670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.121 
[2024-11-29 16:57:09.829878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.121 [2024-11-29 16:57:09.829927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.121 [2024-11-29 16:57:09.829955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.121 [2024-11-29 16:57:09.844268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.121 [2024-11-29 16:57:09.844317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.121 [2024-11-29 16:57:09.844357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.121 [2024-11-29 16:57:09.858480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.121 [2024-11-29 16:57:09.858529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.121 [2024-11-29 16:57:09.858556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.121 [2024-11-29 16:57:09.872978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.121 [2024-11-29 16:57:09.873028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.121 [2024-11-29 16:57:09.873057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.121 [2024-11-29 16:57:09.888001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.121 [2024-11-29 16:57:09.888037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.121 [2024-11-29 16:57:09.888065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.121 [2024-11-29 16:57:09.902890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.121 [2024-11-29 16:57:09.902940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.121 [2024-11-29 16:57:09.902967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:09.918758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:09.918809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:09.918836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:09.933163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:09.933213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:09.933241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:09.947477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:09.947526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:09.947553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:09.962036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:09.962087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:09.962115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:09.977069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:09.977118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:09.977145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:09.991422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:09.991471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:09.991498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:10.006440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:10.006492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:10.006522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:10.027221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:10.027291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:10.027319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:10.042301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:10.042359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:10.042388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:10.056765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:10.056813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:10.056841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:10.071224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:10.071273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:10.071300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:10.085686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:10.085734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:10.085762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:10.100135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:10.100168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:10.100179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:10.114318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:10.114391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:10.114403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:10.128712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:10.128760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:46.381 [2024-11-29 16:57:10.128788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:10.142938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:10.142987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:10.143015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.381 [2024-11-29 16:57:10.157403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.381 [2024-11-29 16:57:10.157451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.381 [2024-11-29 16:57:10.157478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.173299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.173358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.173388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.188126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.188221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.188249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.202564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.202613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.202640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.217187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.217236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.217264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.231623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.231659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3797 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.231687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.246043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.246094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.246122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.260473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.260521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.260549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.274712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.274761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.274788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.289052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.289101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.289129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.303322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.303380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.303409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.317550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.317598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.317626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.332238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.332286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.332315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.346451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.346500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.346528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.360872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.360920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.360947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.641 [2024-11-29 16:57:10.375187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.641 [2024-11-29 16:57:10.375236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-29 16:57:10.375263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.642 [2024-11-29 16:57:10.389475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.642 [2024-11-29 16:57:10.389522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-29 16:57:10.389550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.642 [2024-11-29 16:57:10.403750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.642 [2024-11-29 16:57:10.403819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-29 16:57:10.403848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.642 [2024-11-29 16:57:10.418053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.642 [2024-11-29 16:57:10.418101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-29 16:57:10.418130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.901 [2024-11-29 16:57:10.433453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19724b0) 00:21:46.901 [2024-11-29 16:57:10.433501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.901 [2024-11-29 16:57:10.433528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.901 [2024-11-29 16:57:10.448369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.901 [2024-11-29 16:57:10.448428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.901 [2024-11-29 16:57:10.448457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.901 [2024-11-29 16:57:10.462563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.901 [2024-11-29 16:57:10.462611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.901 [2024-11-29 16:57:10.462638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.901 [2024-11-29 16:57:10.478296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.901 [2024-11-29 16:57:10.478385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.901 [2024-11-29 16:57:10.478414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.901 [2024-11-29 16:57:10.495757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.901 [2024-11-29 16:57:10.495828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.901 [2024-11-29 16:57:10.495858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.901 [2024-11-29 16:57:10.512639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.901 [2024-11-29 16:57:10.512690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.901 [2024-11-29 16:57:10.512702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.901 [2024-11-29 16:57:10.529604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.901 [2024-11-29 16:57:10.529655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.901 [2024-11-29 16:57:10.529699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.901 [2024-11-29 16:57:10.545261] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.901 [2024-11-29 16:57:10.545310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.901 [2024-11-29 16:57:10.545338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.901 [2024-11-29 16:57:10.560516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.901 [2024-11-29 16:57:10.560564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.901 [2024-11-29 16:57:10.560592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.901 [2024-11-29 16:57:10.575674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.901 [2024-11-29 16:57:10.575709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.901 [2024-11-29 16:57:10.575737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.901 [2024-11-29 16:57:10.590846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.901 [2024-11-29 16:57:10.590895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.902 [2024-11-29 16:57:10.590923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.902 [2024-11-29 16:57:10.606132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.902 [2024-11-29 16:57:10.606196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.902 [2024-11-29 16:57:10.606224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.902 [2024-11-29 16:57:10.621511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.902 [2024-11-29 16:57:10.621546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.902 [2024-11-29 16:57:10.621574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.902 [2024-11-29 16:57:10.636627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.902 [2024-11-29 16:57:10.636676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.902 [2024-11-29 16:57:10.636704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:46.902 [2024-11-29 16:57:10.651920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.902 [2024-11-29 16:57:10.651956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.902 [2024-11-29 16:57:10.651984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.902 [2024-11-29 16:57:10.667071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.902 [2024-11-29 16:57:10.667120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.902 [2024-11-29 16:57:10.667148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.902 [2024-11-29 16:57:10.682994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:46.902 [2024-11-29 16:57:10.683045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.902 [2024-11-29 16:57:10.683073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:47.161 [2024-11-29 16:57:10.699572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:47.161 [2024-11-29 16:57:10.699607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.161 [2024-11-29 16:57:10.699635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:47.161 [2024-11-29 16:57:10.715245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:47.161 [2024-11-29 16:57:10.715295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.161 [2024-11-29 16:57:10.715323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:47.161 [2024-11-29 16:57:10.735895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:47.161 [2024-11-29 16:57:10.735932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.161 [2024-11-29 16:57:10.735961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:47.161 [2024-11-29 16:57:10.750506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:47.161 [2024-11-29 16:57:10.750557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.161 [2024-11-29 16:57:10.750586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:47.161 [2024-11-29 16:57:10.764980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:47.161 [2024-11-29 16:57:10.765029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.161 [2024-11-29 16:57:10.765057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:47.161 [2024-11-29 16:57:10.779923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:47.161 [2024-11-29 16:57:10.779958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.161 [2024-11-29 16:57:10.779986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:47.161 [2024-11-29 16:57:10.794138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:47.161 [2024-11-29 16:57:10.794188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.162 [2024-11-29 16:57:10.794215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:47.162 [2024-11-29 16:57:10.808609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19724b0) 00:21:47.162 [2024-11-29 16:57:10.808658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.162 [2024-11-29 16:57:10.808686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:47.162 17015.00 IOPS, 66.46 MiB/s 00:21:47.162 Latency(us) 00:21:47.162 [2024-11-29T16:57:10.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.162 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:47.162 nvme0n1 : 2.00 17041.74 66.57 0.00 0.00 7505.99 6821.70 27405.96 00:21:47.162 [2024-11-29T16:57:10.954Z] =================================================================================================================== 00:21:47.162 [2024-11-29T16:57:10.954Z] Total : 17041.74 66.57 0.00 0.00 7505.99 6821.70 27405.96 00:21:47.162 { 00:21:47.162 "results": [ 00:21:47.162 { 00:21:47.162 "job": "nvme0n1", 00:21:47.162 "core_mask": "0x2", 00:21:47.162 "workload": "randread", 00:21:47.162 "status": "finished", 00:21:47.162 "queue_depth": 128, 00:21:47.162 "io_size": 4096, 00:21:47.162 "runtime": 2.004373, 00:21:47.162 "iops": 17041.738239339684, 00:21:47.162 "mibps": 66.56928999742064, 00:21:47.162 "io_failed": 0, 00:21:47.162 "io_timeout": 0, 00:21:47.162 "avg_latency_us": 7505.986224869456, 00:21:47.162 "min_latency_us": 6821.701818181818, 00:21:47.162 "max_latency_us": 27405.963636363635 00:21:47.162 } 00:21:47.162 ], 00:21:47.162 "core_count": 1 00:21:47.162 } 00:21:47.162 16:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:47.162 16:57:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:47.162 16:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:47.162 16:57:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:47.162 | .driver_specific 00:21:47.162 | .nvme_error 00:21:47.162 | .status_code 00:21:47.162 | .command_transient_transport_error' 00:21:47.421 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 133 > 0 )) 00:21:47.421 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96838 00:21:47.421 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96838 ']' 00:21:47.421 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96838 00:21:47.421 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:47.421 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.421 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96838 00:21:47.421 killing process with pid 96838 00:21:47.421 Received shutdown signal, test time was about 2.000000 seconds 00:21:47.421 00:21:47.421 Latency(us) 00:21:47.421 [2024-11-29T16:57:11.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.421 [2024-11-29T16:57:11.213Z] =================================================================================================================== 00:21:47.421 [2024-11-29T16:57:11.213Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:47.421 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:47.421 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:47.421 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96838' 00:21:47.421 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96838 00:21:47.421 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96838 00:21:47.680 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:21:47.680 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:47.681 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:47.681 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:47.681 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:47.681 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96885 00:21:47.681 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96885 /var/tmp/bperf.sock 00:21:47.681 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r 
/var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:47.681 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96885 ']' 00:21:47.681 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:47.681 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.681 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:47.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:47.681 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.681 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:47.681 [2024-11-29 16:57:11.368035] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:21:47.681 [2024-11-29 16:57:11.368312] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96885 ] 00:21:47.681 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:47.681 Zero copy mechanism will not be used. 00:21:47.940 [2024-11-29 16:57:11.494055] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:47.940 [2024-11-29 16:57:11.520726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.940 [2024-11-29 16:57:11.540650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.940 [2024-11-29 16:57:11.569356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:47.940 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.940 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:47.940 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:47.941 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:48.200 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:48.200 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.200 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:48.200 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.200 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:48.200 16:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:48.458 nvme0n1 00:21:48.458 16:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:48.458 16:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.458 16:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:48.458 16:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.458 16:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:48.458 16:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:48.718 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:48.718 Zero copy mechanism will not be used. 00:21:48.718 Running I/O for 2 seconds... 00:21:48.718 [2024-11-29 16:57:12.295275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.718 [2024-11-29 16:57:12.295494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.718 [2024-11-29 16:57:12.295513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.718 [2024-11-29 16:57:12.299529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.718 [2024-11-29 16:57:12.299564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.718 [2024-11-29 16:57:12.299593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.718 [2024-11-29 16:57:12.303566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.718 [2024-11-29 16:57:12.303602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.718 [2024-11-29 16:57:12.303631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.718 [2024-11-29 16:57:12.307537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.718 [2024-11-29 16:57:12.307572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.718 [2024-11-29 16:57:12.307601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.718 [2024-11-29 16:57:12.311452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.718 [2024-11-29 16:57:12.311485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.718 [2024-11-29 16:57:12.311514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.718 [2024-11-29 16:57:12.315342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.718 [2024-11-29 16:57:12.315374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.718 [2024-11-29 16:57:12.315402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.319390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.319424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.319453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.323263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.323482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.323498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.327390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.327425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.327453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.331347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.331380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.331408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.335245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.335455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.335472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.339452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.339487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.339499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.343371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.343407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.343419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.347243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.347444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.347462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.351424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.351460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.351472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.355317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.355363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.355375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.359249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.359418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.359435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.363323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.363370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.363382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.367188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.367365] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.367384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.371390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.371426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.371438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.375316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.375535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.375552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.379510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.379545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.379558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.383413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.383446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.383475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.387322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.387530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.387547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.391588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.391624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.391637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.395586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 
00:21:48.719 [2024-11-29 16:57:12.395621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.395633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.399535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.399571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.399583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.403527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.403560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.403588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.407414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.407446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.407474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.411338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.411370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.411397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.415177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.415388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.415406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.419168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.719 [2024-11-29 16:57:12.419355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-11-29 16:57:12.419372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.719 [2024-11-29 16:57:12.423200] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.423407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.423425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.427357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.427391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.427419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.431322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.431523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.431540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.435583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.435619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.435646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.439499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.439532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.439560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.443441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.443476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.443488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.447380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.447415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.447426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:21:48.720 [2024-11-29 16:57:12.451296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.451501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.451518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.455522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.455558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.455570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.459500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.459536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.459548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.463337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.463370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.463398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.467215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.467427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.467444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.471456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.471491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.471503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.475247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.475462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.475478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.479421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.479454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.479482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.483337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.483370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.483398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.487252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.487461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.487479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.491466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.491499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.491527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.495314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.495505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.495521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.499470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.499503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.499531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.503315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.503358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.503387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.720 [2024-11-29 16:57:12.507724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.720 [2024-11-29 16:57:12.507757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.720 [2024-11-29 16:57:12.507825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.511908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.511947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.511959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.516384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.516463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.516477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.520843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.520879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.520908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.525328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.525421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.525436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.530002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.530055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.530067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.534823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.534994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:48.981 [2024-11-29 16:57:12.535011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.539552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.539607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.539621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.544173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.544208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.544236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.548532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.548584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.548596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.552905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.552938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.552966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.557153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.557187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.557216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.561339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.561414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.561428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.565250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.565463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.565480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.569411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.569445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.569473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.573364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.573396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.573425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.577281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.981 [2024-11-29 16:57:12.577496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.981 [2024-11-29 16:57:12.577512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.981 [2024-11-29 16:57:12.581466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.581500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.581528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.585415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.585450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.585462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.589257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.589469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.589485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.593453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.593487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.593515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.597349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.597391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.597420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.601196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.601410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.601427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.605454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.605490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.605502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.609386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.609420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.609448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.613293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.613511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.613529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.617439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.617473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.617501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.621378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 
00:21:48.982 [2024-11-29 16:57:12.621412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.621440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.625270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.625481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.625498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.629401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.629435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.629463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.633264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.633475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.633493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.637494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.637527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.637555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.641397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.641430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.641458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.645277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.645491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.645508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.649425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.649460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.649488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.653309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.653500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.653516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.657475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.657509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.657537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.661390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.661422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.661451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.665203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.665414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.665431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.669299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.669514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.669531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.673502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.673536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.673563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.677415] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.677449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.677477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.681365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.681397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.681425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.982 [2024-11-29 16:57:12.685160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.982 [2024-11-29 16:57:12.685370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.982 [2024-11-29 16:57:12.685388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.689335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.689367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.689396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.693243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.693453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.693470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.697400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.697433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.697462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.701316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.701359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.701387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:21:48.983 [2024-11-29 16:57:12.705168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.705380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.705398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.709351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.709384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.709413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.713230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.713444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.713461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.717487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.717521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.717550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.721395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.721428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.721456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.725265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.725478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.725494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.729370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.729403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.729431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.733236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.733448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.733464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.737363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.737396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.737424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.741231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.741442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.741459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.745392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.745426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.745454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.749321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.749362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.749391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.753247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.753460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.753476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.757447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.757480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.757509] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.761372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.761404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.761432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.765196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.765407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.765424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.983 [2024-11-29 16:57:12.769676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:48.983 [2024-11-29 16:57:12.769711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.983 [2024-11-29 16:57:12.769739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.773877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.773911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.773939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.777994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.778029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.778071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.782049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.782083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.782111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.786009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.786043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.786071] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.790038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.790072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.790099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.794080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.794114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.794142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.798050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.798083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.798111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.802066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.802100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.802129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.806104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.806139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.806167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.810164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.810200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.810228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.814159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.814192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.814220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.818153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.818187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.818215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.822785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.822819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.822848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.826722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.826758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.826770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.830577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.830610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.830638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.834534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.834567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.834594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.838390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.838423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.838451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.842290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.842323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.842379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.846175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.846208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.244 [2024-11-29 16:57:12.846236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.244 [2024-11-29 16:57:12.850107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.244 [2024-11-29 16:57:12.850141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.850169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.853990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.854023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.854051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.857908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.857942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.857970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.861833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.861866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.861894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.865740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.865774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.865802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.869682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.869715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.869743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.873778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.873811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.873838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.877738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.877773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.877786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.881716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.881750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.881777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.885642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.885675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.885703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.889635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.889668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.889697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.893601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.893634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.893661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.897562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 
00:21:49.245 [2024-11-29 16:57:12.897596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.897624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.901452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.901485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.901513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.905373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.905407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.905434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.909285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.909508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.909525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.913980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.914016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.914044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.918258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.918294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.918323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.922548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.922583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.922611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.927034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.927070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.927099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.931366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.931430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.931442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.935923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.935963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.935977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.940394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.940472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.940487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.944727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.944762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.944789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.948828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.948863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.948891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.953024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.953059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.953086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.957225] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.957260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.245 [2024-11-29 16:57:12.957288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.245 [2024-11-29 16:57:12.961689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.245 [2024-11-29 16:57:12.961753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:12.961781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:12.965804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:12.965839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:12.965867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:12.969863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:12.969898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:12.969925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:12.974093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:12.974128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:12.974157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:12.978572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:12.978607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:12.978636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:12.982736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:12.982770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:12.982798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:21:49.246 [2024-11-29 16:57:12.986742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:12.986776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:12.986804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:12.990787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:12.990822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:12.990851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:12.994877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:12.994912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:12.994941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:12.999164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:12.999198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:12.999226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:13.003354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:13.003390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:13.003418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:13.007356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:13.007390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:13.007419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:13.011399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:13.011432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:13.011460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:13.015537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:13.015572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:13.015585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:13.019733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:13.019770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:13.019822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:13.023804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:13.023857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:13.023871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:13.027940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:13.027977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:13.027990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.246 [2024-11-29 16:57:13.032345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.246 [2024-11-29 16:57:13.032424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.246 [2024-11-29 16:57:13.032454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.506 [2024-11-29 16:57:13.036910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.036946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.036975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.041194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.041229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.041257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.045314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.045373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.045402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.049370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.049405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.049433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.053482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.053516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.053529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.057672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.057708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.057750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.061758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.061792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.061820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.065865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.065900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.065929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.069963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.069998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:49.507 [2024-11-29 16:57:13.070027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.074123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.074170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.074182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.078228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.078275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.078286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.082388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.082435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.082445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.086560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.086590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.086616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.090675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.090707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.090718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.094754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.094801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.094812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.098825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.098872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.098883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.103105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.103153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.103164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.107160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.107208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.107219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.111183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.111231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.111242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.115247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.115293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.115304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.119557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.119588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.119599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.123672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.123715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.123726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.127734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.127766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.127802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.131884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.131917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.131929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.135856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.135889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.135900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.139744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.139799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.507 [2024-11-29 16:57:13.139827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.507 [2024-11-29 16:57:13.143628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.507 [2024-11-29 16:57:13.143674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.143685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.147527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.147572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.147582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.151350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.151405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.151417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.155174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 
00:21:49.508 [2024-11-29 16:57:13.155220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.155231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.159276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.159310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.159321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.163480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.163510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.163521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.167293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.167349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.167362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.171266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.171298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.171310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.175328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.175369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.175380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.179250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.179281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.179292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.183320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.183377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.183388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.187252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.187298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.187309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.191148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.191193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.191204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.195224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.195271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.195282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.199216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.199263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.199275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.203263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.203311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.203323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.207212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.207260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.207271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.211316] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.211372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.211384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.215270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.215316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.215328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.219905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.219940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.219954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.224971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.225034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.225045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.229240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.229286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.229297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.233238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.233284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.233294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.237146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.237193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.237204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:21:49.508 [2024-11-29 16:57:13.241100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.241146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.241156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.245210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.245257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.245268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.249190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.249237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.508 [2024-11-29 16:57:13.249248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.508 [2024-11-29 16:57:13.253176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.508 [2024-11-29 16:57:13.253222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.509 [2024-11-29 16:57:13.253233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.509 [2024-11-29 16:57:13.257076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.509 [2024-11-29 16:57:13.257123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.509 [2024-11-29 16:57:13.257134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.509 [2024-11-29 16:57:13.261107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.509 [2024-11-29 16:57:13.261139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.509 [2024-11-29 16:57:13.261150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.509 [2024-11-29 16:57:13.265037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.509 [2024-11-29 16:57:13.265082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.509 [2024-11-29 16:57:13.265093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.509 [2024-11-29 16:57:13.268967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.509 [2024-11-29 16:57:13.269013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.509 [2024-11-29 16:57:13.269024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.509 [2024-11-29 16:57:13.272961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.509 [2024-11-29 16:57:13.273006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.509 [2024-11-29 16:57:13.273016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.509 [2024-11-29 16:57:13.276872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.509 [2024-11-29 16:57:13.276917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.509 [2024-11-29 16:57:13.276928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.509 [2024-11-29 16:57:13.280778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.509 [2024-11-29 16:57:13.280824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.509 [2024-11-29 16:57:13.280835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.509 [2024-11-29 16:57:13.284799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.509 [2024-11-29 16:57:13.284844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.509 [2024-11-29 16:57:13.284855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.509 7564.00 IOPS, 945.50 MiB/s [2024-11-29T16:57:13.301Z] [2024-11-29 16:57:13.290214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.509 [2024-11-29 16:57:13.290258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.509 [2024-11-29 16:57:13.290270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.509 [2024-11-29 16:57:13.294539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.509 [2024-11-29 16:57:13.294586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.509 [2024-11-29 
16:57:13.294598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.769 [2024-11-29 16:57:13.298807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.769 [2024-11-29 16:57:13.298852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.769 [2024-11-29 16:57:13.298864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.769 [2024-11-29 16:57:13.302965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.303012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.303053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.307024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.307071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.307082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.310994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.311039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.311050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.314878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.314923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.314934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.318873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.318919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.318929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.322789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.322835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8032 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.322846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.326695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.326741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.326766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.330600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.330646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.330657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.334479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.334524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.334535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.338288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.338334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.338372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.342145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.342190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.342202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.346065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.346110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.346120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.350017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.350063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.350074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.353956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.354001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.354012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.357875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.357920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.357931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.361836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.361882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.361893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.365717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.365761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.365773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.369634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.369680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.369690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.373591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.373636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.373647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.377628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.377658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.377669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.381592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.381623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.381634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.385506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.385550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.385561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.389385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.389430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.389440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.393270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.393316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.393327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.397119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.397164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.397175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.401071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.770 [2024-11-29 16:57:13.401116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.770 [2024-11-29 16:57:13.401127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.770 [2024-11-29 16:57:13.405060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 
00:21:49.770 [2024-11-29 16:57:13.405106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.405118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.408973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.409018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.409029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.412979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.413025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.413036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.416924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.416970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.416980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.420840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.420885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.420896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.424683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.424728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.424739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.428633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.428678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.428689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.432529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.432575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.432586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.436395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.436439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.436449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.440238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.440284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.440295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.444057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.444104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.444129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.447966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.448013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.448024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.451981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.452030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.452045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.455924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.455971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.455982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.459937] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.459985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.459996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.463844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.463875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.463886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.467838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.467870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.467881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.471711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.471742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.471754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.475646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.475677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.475688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.479634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.479666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.479676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.483647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.483678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.483689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:21:49.771 [2024-11-29 16:57:13.487565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.487596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.487606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.491517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.491547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.491558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.495410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.495454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.495465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.499251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.499296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.499307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.503322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.503379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.503391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.507315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.507369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.507381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.771 [2024-11-29 16:57:13.511259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.771 [2024-11-29 16:57:13.511304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.771 [2024-11-29 16:57:13.511315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.772 [2024-11-29 16:57:13.515096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.772 [2024-11-29 16:57:13.515138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.772 [2024-11-29 16:57:13.515150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.772 [2024-11-29 16:57:13.518979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.772 [2024-11-29 16:57:13.519024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.772 [2024-11-29 16:57:13.519035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.772 [2024-11-29 16:57:13.522856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.772 [2024-11-29 16:57:13.522903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.772 [2024-11-29 16:57:13.522914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.772 [2024-11-29 16:57:13.526732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.772 [2024-11-29 16:57:13.526778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.772 [2024-11-29 16:57:13.526789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.772 [2024-11-29 16:57:13.530846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.772 [2024-11-29 16:57:13.530891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.772 [2024-11-29 16:57:13.530903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.772 [2024-11-29 16:57:13.534858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.772 [2024-11-29 16:57:13.534904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.772 [2024-11-29 16:57:13.534915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.772 [2024-11-29 16:57:13.539191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.772 [2024-11-29 16:57:13.539238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.772 [2024-11-29 16:57:13.539249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:49.772 [2024-11-29 16:57:13.543744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.772 [2024-11-29 16:57:13.543818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.772 [2024-11-29 16:57:13.543832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:49.772 [2024-11-29 16:57:13.548329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.772 [2024-11-29 16:57:13.548415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.772 [2024-11-29 16:57:13.548429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:49.772 [2024-11-29 16:57:13.553236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.772 [2024-11-29 16:57:13.553297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.772 [2024-11-29 16:57:13.553309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:49.772 [2024-11-29 16:57:13.558036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:49.772 [2024-11-29 16:57:13.558083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.772 [2024-11-29 16:57:13.558109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.562650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.562700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 [2024-11-29 16:57:13.562743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.567205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.567252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 [2024-11-29 16:57:13.567263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.571552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.571599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 
[2024-11-29 16:57:13.571611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.575866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.575898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 [2024-11-29 16:57:13.575911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.579832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.579880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 [2024-11-29 16:57:13.579892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.583878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.583912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 [2024-11-29 16:57:13.583923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.587825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.587857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 [2024-11-29 16:57:13.587869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.591674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.591705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 [2024-11-29 16:57:13.591716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.595656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.595698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 [2024-11-29 16:57:13.595709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.599509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.599553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 [2024-11-29 16:57:13.599564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.603308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.603365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 [2024-11-29 16:57:13.603377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.607158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.607203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 [2024-11-29 16:57:13.607215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.611167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.611213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 [2024-11-29 16:57:13.611224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.615043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.615089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 [2024-11-29 16:57:13.615100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.618946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.032 [2024-11-29 16:57:13.618992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.032 [2024-11-29 16:57:13.619002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.032 [2024-11-29 16:57:13.622914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.622961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.622972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.626880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.626925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.626936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.630844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.630890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.630901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.634813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.634859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.634870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.638796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.638841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.638852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.642649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.642696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.642708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.646418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.646463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.646474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.650175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.650221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.650232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.654161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.654208] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.654219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.658079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.658125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.658136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.662015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.662061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.662072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.665918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.665964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.665975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.669811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.669856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.669867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.673728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.673774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.673785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.677648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.677694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.677706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.681542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.681590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.681602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.685526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.685572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.685583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.689301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.689356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.689369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.693199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.693244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.693255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.697059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.697105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.697116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.700939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.700985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.700996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.704799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.704847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.704858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.708699] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.708745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.708756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.712598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.712644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.712655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.716565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.716610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.716621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.720396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.720440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.720451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.724285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.724330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.724351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.728222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.728267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.728278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.033 [2024-11-29 16:57:13.732150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.033 [2024-11-29 16:57:13.732211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.033 [2024-11-29 16:57:13.732222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:21:50.034 [2024-11-29 16:57:13.736052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.736100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.736140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.740024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.740056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.740068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.743991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.744038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.744050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.747933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.747965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.747976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.751749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.751820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.751849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.755740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.755810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.755837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.759655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.759701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.759712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.763568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.763612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.763623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.767418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.767463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.767474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.771289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.771334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.771358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.775147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.775192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.775204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.779037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.779083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.779094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.782883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.782928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.782939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.786805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.786850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.786861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.790666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.790711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.790723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.794549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.794595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.794606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.798372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.798417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.798428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.802214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.802259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.802270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.806180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.806229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.806240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.810184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.810231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.810242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.814105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.814151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 
[2024-11-29 16:57:13.814162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.034 [2024-11-29 16:57:13.818086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.034 [2024-11-29 16:57:13.818133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.034 [2024-11-29 16:57:13.818145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.822493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.822541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.822553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.826471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.826516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.826527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.830528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.830573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.830585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.834574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.834620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.834632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.838443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.838489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.838500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.842404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.842457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.842469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.846238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.846283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.846295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.850113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.850158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.850169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.854050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.854095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.854105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.857945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.857990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.858002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.861862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.861907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.861918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.865749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.865794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.865806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.869654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.869699] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.869710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.873706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.873753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.873764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.877624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.877669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.877680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.881581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.881627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.881637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.885430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.885474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.885485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.889280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.889325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.889336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.893153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.893199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.893210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.897090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.897136] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.897147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.901005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.901052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.901063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.905001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.905048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.905059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.908912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.908957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.908969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.912806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.912851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.912862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.916645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.916691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.916702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.295 [2024-11-29 16:57:13.920510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.295 [2024-11-29 16:57:13.920554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.295 [2024-11-29 16:57:13.920565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.924331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.924386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.924397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.928122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.928198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.928208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.932069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.932117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.932157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.936041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.936073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.936084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.939960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.940007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.940019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.943899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.943931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.943942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.947699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.947744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.947755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.951599] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.951644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.951655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.955480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.955525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.955536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.959287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.959333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.959356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.963281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.963327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.963338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.967100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.967145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.967155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.971032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.971078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.971089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.974989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.975034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.975046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:21:50.296 [2024-11-29 16:57:13.978951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.978997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.979008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.982881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.982927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.982937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.986836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.986882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.986893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.990710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.990772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.990783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.994683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.994729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.994755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:13.998578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:13.998625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:13.998636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:14.002420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:14.002466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:14.002477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:14.006309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:14.006364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:14.006376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:14.010202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:14.010249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:14.010274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:14.014192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:14.014238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:14.014249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:14.018123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:14.018169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:14.018179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:14.022021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:14.022067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:14.022078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:14.025931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.296 [2024-11-29 16:57:14.025976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.296 [2024-11-29 16:57:14.025987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.296 [2024-11-29 16:57:14.029873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.297 [2024-11-29 16:57:14.029919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.297 [2024-11-29 16:57:14.029930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.297 [2024-11-29 16:57:14.033806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.297 [2024-11-29 16:57:14.033853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.297 [2024-11-29 16:57:14.033864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.297 [2024-11-29 16:57:14.037760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.297 [2024-11-29 16:57:14.037806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.297 [2024-11-29 16:57:14.037832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.297 [2024-11-29 16:57:14.041627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.297 [2024-11-29 16:57:14.041673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.297 [2024-11-29 16:57:14.041683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.297 [2024-11-29 16:57:14.045502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.297 [2024-11-29 16:57:14.045547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.297 [2024-11-29 16:57:14.045558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.297 [2024-11-29 16:57:14.049383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.297 [2024-11-29 16:57:14.049428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.297 [2024-11-29 16:57:14.049439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.297 [2024-11-29 16:57:14.053307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.297 [2024-11-29 16:57:14.053362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.297 [2024-11-29 16:57:14.053373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.297 [2024-11-29 16:57:14.057146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.297 [2024-11-29 16:57:14.057192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.297 
[2024-11-29 16:57:14.057203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.297 [2024-11-29 16:57:14.061100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.297 [2024-11-29 16:57:14.061146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.297 [2024-11-29 16:57:14.061157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.297 [2024-11-29 16:57:14.065091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.297 [2024-11-29 16:57:14.065137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.297 [2024-11-29 16:57:14.065148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.297 [2024-11-29 16:57:14.069036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.297 [2024-11-29 16:57:14.069082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.297 [2024-11-29 16:57:14.069093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.297 [2024-11-29 16:57:14.072972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.297 [2024-11-29 16:57:14.073017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.297 [2024-11-29 16:57:14.073029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.297 [2024-11-29 16:57:14.076895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.297 [2024-11-29 16:57:14.076941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.297 [2024-11-29 16:57:14.076951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.297 [2024-11-29 16:57:14.080881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.297 [2024-11-29 16:57:14.080928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.297 [2024-11-29 16:57:14.080940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.557 [2024-11-29 16:57:14.085252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.557 [2024-11-29 16:57:14.085299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.557 [2024-11-29 16:57:14.085311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.557 [2024-11-29 16:57:14.089309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.557 [2024-11-29 16:57:14.089366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.557 [2024-11-29 16:57:14.089377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.557 [2024-11-29 16:57:14.093489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.557 [2024-11-29 16:57:14.093534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.557 [2024-11-29 16:57:14.093545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.557 [2024-11-29 16:57:14.097454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.557 [2024-11-29 16:57:14.097500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.557 [2024-11-29 16:57:14.097511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.557 [2024-11-29 16:57:14.101507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.557 [2024-11-29 16:57:14.101552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.557 [2024-11-29 16:57:14.101564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.557 [2024-11-29 16:57:14.105513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.557 [2024-11-29 16:57:14.105559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.557 [2024-11-29 16:57:14.105570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.109568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.109614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.109625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.113475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.113520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.113532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.117378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.117423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.117433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.121509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.121556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.121568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.125855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.125901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.125912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.130124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.130170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.130181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.134404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.134453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.134465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.138966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.139013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.139025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.143529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.143577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.143589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.147964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.147998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.148011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.152392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.152453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.152466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.156729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.156790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.156802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.160872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.160919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.160930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.165141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.165188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.165200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.169243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.169290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.169301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.173216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 
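The recurring *ERROR* records above are the NVMe/TCP receive path rejecting the per-PDU data digest (DDGST), which the NVMe/TCP transport defines as a CRC-32C over the PDU's data payload; each mismatch is then surfaced as the paired TRANSIENT TRANSPORT ERROR (sct 00h / sc 22h, dnr:0, i.e. retryable) completion notices. The sketch below is a minimal, standalone bitwise CRC-32C for illustration only, assuming nothing beyond the standard checksum definition; SPDK's nvme_tcp_accel_seq_recv_compute_crc32_done goes through accelerated helpers instead, and the function and variable names here are hypothetical.

    /*
     * Illustrative bitwise CRC-32C (Castagnoli, reflected polynomial
     * 0x82F63B78), the checksum the NVMe/TCP data digest is based on.
     * Hypothetical names; not the accelerated routine SPDK actually uses.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t crc32c_data_digest(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++) {
                /* Shift out one bit; fold in the reflected polynomial when the LSB is set. */
                crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* A receiver compares this value against the PDU's DDGST field; a
         * mismatch is what the log reports as a "data digest error". */
        const uint8_t payload[] = "123456789";

        printf("CRC-32C = 0x%08x\n",
               (unsigned int)crc32c_data_digest(payload, sizeof(payload) - 1));
        /* Standard check value for "123456789" is 0xe3069283. */
        return 0;
    }

This run appears to inject digest mismatches on purpose, so every READ in the log completes with the transient, retryable transport status rather than a media or data error.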
00:21:50.558 [2024-11-29 16:57:14.173263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.173274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.177310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.177381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.177393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.181424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.181470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.181481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.185482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.185528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.185540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.189367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.189413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.189424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.193378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.193423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.193435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.197582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.197628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.197640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.201565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.201612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.201624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.205639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.205686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.205697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.209649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.209696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.209708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.213930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.213977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.213989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.558 [2024-11-29 16:57:14.217945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.558 [2024-11-29 16:57:14.217992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.558 [2024-11-29 16:57:14.218003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.222002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.222049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.222060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.226278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.226325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.226352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.230351] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.230410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.230422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.234395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.234442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.234453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.238403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.238449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.238460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.242705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.242767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.242778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.246796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.246843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.246854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.250788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.250835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.250847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.254981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.255028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.255039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:21:50.559 [2024-11-29 16:57:14.259247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.259294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.259305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.263220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.263268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.263280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.267193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.267241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.267252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.271396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.271455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.271467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.275480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.275526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.275537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.279492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.279537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.279549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.559 [2024-11-29 16:57:14.283460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.283506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.283517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.559 7657.00 IOPS, 957.12 MiB/s [2024-11-29T16:57:14.351Z] [2024-11-29 16:57:14.289434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c04bb0) 00:21:50.559 [2024-11-29 16:57:14.289480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.559 [2024-11-29 16:57:14.289491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.559 00:21:50.559 Latency(us) 00:21:50.559 [2024-11-29T16:57:14.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.559 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:50.559 nvme0n1 : 2.00 7658.21 957.28 0.00 0.00 2086.17 1735.21 10009.13 00:21:50.559 [2024-11-29T16:57:14.351Z] =================================================================================================================== 00:21:50.559 [2024-11-29T16:57:14.351Z] Total : 7658.21 957.28 0.00 0.00 2086.17 1735.21 10009.13 00:21:50.559 { 00:21:50.559 "results": [ 00:21:50.559 { 00:21:50.559 "job": "nvme0n1", 00:21:50.559 "core_mask": "0x2", 00:21:50.559 "workload": "randread", 00:21:50.559 "status": "finished", 00:21:50.559 "queue_depth": 16, 00:21:50.559 "io_size": 131072, 00:21:50.559 "runtime": 2.003731, 00:21:50.559 "iops": 7658.213602524491, 00:21:50.559 "mibps": 957.2767003155614, 00:21:50.559 "io_failed": 0, 00:21:50.559 "io_timeout": 0, 00:21:50.559 "avg_latency_us": 2086.1686680292664, 00:21:50.559 "min_latency_us": 1735.2145454545455, 00:21:50.559 "max_latency_us": 10009.134545454546 00:21:50.559 } 00:21:50.559 ], 00:21:50.559 "core_count": 1 00:21:50.559 } 00:21:50.559 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:50.559 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:50.559 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:50.559 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:50.559 | .driver_specific 00:21:50.559 | .nvme_error 00:21:50.559 | .status_code 00:21:50.559 | .command_transient_transport_error' 00:21:50.834 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 495 > 0 )) 00:21:50.834 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96885 00:21:50.834 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96885 ']' 00:21:50.834 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96885 00:21:50.834 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:50.834 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.834 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96885 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 
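[editorial note] The trace above shows how digest.sh validates the randread run: it queries bdevperf's RPC socket for the nvme0n1 I/O statistics and pulls the accumulated count of completions with TRANSIENT TRANSPORT ERROR status out of the JSON with jq, then requires that count to be non-zero (here 495). A minimal standalone sketch of the same check, assuming only what the trace shows (rpc.py path, socket path, jq filter); the variable names are illustrative:

    #!/usr/bin/env bash
    # Query bdevperf's RPC socket for nvme0n1 iostat and extract the number of
    # completions that ended with COMMAND TRANSIENT TRANSPORT ERROR status.
    # Paths and the jq filter are copied verbatim from the trace above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    errcount=$("$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # The digest-error test passes only if at least one such error was counted.
    (( errcount > 0 )) || exit 1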
00:21:51.134 killing process with pid 96885 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96885' 00:21:51.134 Received shutdown signal, test time was about 2.000000 seconds 00:21:51.134 00:21:51.134 Latency(us) 00:21:51.134 [2024-11-29T16:57:14.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.134 [2024-11-29T16:57:14.926Z] =================================================================================================================== 00:21:51.134 [2024-11-29T16:57:14.926Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96885 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96885 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96939 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96939 /var/tmp/bperf.sock 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96939 ']' 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:51.134 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:51.135 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:51.135 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.135 16:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:51.135 [2024-11-29 16:57:14.800378] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:21:51.135 [2024-11-29 16:57:14.800479] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96939 ] 00:21:51.412 [2024-11-29 16:57:14.928832] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:21:51.412 [2024-11-29 16:57:14.947603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.412 [2024-11-29 16:57:14.967022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.412 [2024-11-29 16:57:14.996541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:51.988 16:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.989 16:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:51.989 16:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:51.989 16:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:52.247 16:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:52.247 16:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.247 16:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:52.247 16:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.247 16:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:52.247 16:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:52.816 nvme0n1 00:21:52.816 16:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:52.816 16:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.816 16:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:52.816 16:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.816 16:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:52.816 16:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:52.816 Running I/O for 2 seconds... 
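[editorial note] For orientation, the "run_bperf_err randwrite 4096 128" sequence traced above boils down to the steps below. All commands and flags are copied from the trace; this is a simplified sketch rather than the exact digest.sh flow (waitforlisten and cleanup are omitted), and the target-side RPC socket addressed by rpc_cmd is not shown in the log, so the TGT_SOCK value is an assumption (SPDK's default socket):

    #!/usr/bin/env bash
    # Condensed from the traced "run_bperf_err randwrite 4096 128" sequence above.
    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock
    TGT_SOCK=/var/tmp/spdk.sock   # assumed: default socket of the nvmf target that rpc_cmd talks to

    # Start bdevperf on core mask 0x2: random 4096-byte writes, queue depth 128,
    # 2-second runtime, -z to wait for RPC configuration before running I/O.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

    # Initiator side: keep NVMe error statistics and retry failed I/O at the bdev layer.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target side: start from a clean crc32c error-injection state.
    "$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t disable

    # Attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Target side: inject crc32c corruption (flags as in the trace) so data digests mismatch,
    # producing the TRANSIENT TRANSPORT ERROR completions seen in the log below.
    "$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256

    # Drive the workload through bdevperf's RPC helper.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests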
00:21:52.816 [2024-11-29 16:57:16.465946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef7100 00:21:52.816 [2024-11-29 16:57:16.467455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.816 [2024-11-29 16:57:16.467497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:52.816 [2024-11-29 16:57:16.480167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef7970 00:21:52.816 [2024-11-29 16:57:16.481715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.816 [2024-11-29 16:57:16.481774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.816 [2024-11-29 16:57:16.493776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef81e0 00:21:52.816 [2024-11-29 16:57:16.495185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.816 [2024-11-29 16:57:16.495231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:52.816 [2024-11-29 16:57:16.507229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef8a50 00:21:52.816 [2024-11-29 16:57:16.508706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.816 [2024-11-29 16:57:16.508749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:52.816 [2024-11-29 16:57:16.520792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef92c0 00:21:52.816 [2024-11-29 16:57:16.522201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.816 [2024-11-29 16:57:16.522245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:52.816 [2024-11-29 16:57:16.535354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef9b30 00:21:52.816 [2024-11-29 16:57:16.536941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.816 [2024-11-29 16:57:16.536986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:52.816 [2024-11-29 16:57:16.551082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efa3a0 00:21:52.816 [2024-11-29 16:57:16.552760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.816 [2024-11-29 16:57:16.552805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 00:21:52.816 [2024-11-29 16:57:16.566755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efac10 00:21:52.816 [2024-11-29 16:57:16.568272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.816 [2024-11-29 16:57:16.568317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:52.816 [2024-11-29 16:57:16.581775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efb480 00:21:52.816 [2024-11-29 16:57:16.583166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.816 [2024-11-29 16:57:16.583211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:52.816 [2024-11-29 16:57:16.596494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efbcf0 00:21:52.816 [2024-11-29 16:57:16.597844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:52.816 [2024-11-29 16:57:16.597888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.612638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efc560 00:21:53.076 [2024-11-29 16:57:16.614112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.614157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.629436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efcdd0 00:21:53.076 [2024-11-29 16:57:16.630924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.630968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.645367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efd640 00:21:53.076 [2024-11-29 16:57:16.646659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.646703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.659829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efdeb0 00:21:53.076 [2024-11-29 16:57:16.661135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.661180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0068 p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.674340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efe720 00:21:53.076 [2024-11-29 16:57:16.675657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.675704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.688692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eff3c8 00:21:53.076 [2024-11-29 16:57:16.689971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.690016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.709009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eff3c8 00:21:53.076 [2024-11-29 16:57:16.711286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.711329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.723321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efe720 00:21:53.076 [2024-11-29 16:57:16.725645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.725689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.737808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efdeb0 00:21:53.076 [2024-11-29 16:57:16.740189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.740233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.751974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efd640 00:21:53.076 [2024-11-29 16:57:16.754142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.754184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.765542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efcdd0 00:21:53.076 [2024-11-29 16:57:16.767671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.767716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.779001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efc560 00:21:53.076 [2024-11-29 16:57:16.781223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.781266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.793103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efbcf0 00:21:53.076 [2024-11-29 16:57:16.795376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.795427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.807250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efb480 00:21:53.076 [2024-11-29 16:57:16.809462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.809507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.820977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efac10 00:21:53.076 [2024-11-29 16:57:16.823091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.823134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.834493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efa3a0 00:21:53.076 [2024-11-29 16:57:16.836644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.836687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.848054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef9b30 00:21:53.076 [2024-11-29 16:57:16.850116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.850157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:53.076 [2024-11-29 16:57:16.861594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef92c0 00:21:53.076 [2024-11-29 16:57:16.863870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.076 [2024-11-29 16:57:16.863900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:16.876556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef8a50 00:21:53.336 [2024-11-29 16:57:16.878532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:16.878575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:16.890121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef81e0 00:21:53.336 [2024-11-29 16:57:16.892320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:16.892371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:16.903685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef7970 00:21:53.336 [2024-11-29 16:57:16.905714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:16.905759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:16.917311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef7100 00:21:53.336 [2024-11-29 16:57:16.919239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:16.919282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:16.930769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef6890 00:21:53.336 [2024-11-29 16:57:16.932809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:16.932852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:16.944290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef6020 00:21:53.336 [2024-11-29 16:57:16.946186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:16.946229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:16.957822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef57b0 00:21:53.336 [2024-11-29 16:57:16.959757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:16.959817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:16.971403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef4f40 00:21:53.336 [2024-11-29 16:57:16.973390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:16.973433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:16.985112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef46d0 00:21:53.336 [2024-11-29 16:57:16.987044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:16.987086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:16.998727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef3e60 00:21:53.336 [2024-11-29 16:57:17.000707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:17.000753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:17.012233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef35f0 00:21:53.336 [2024-11-29 16:57:17.014147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:17.014189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:17.025864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef2d80 00:21:53.336 [2024-11-29 16:57:17.027700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:17.027729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:17.039240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef2510 00:21:53.336 [2024-11-29 16:57:17.041140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:17.041183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:17.052770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef1ca0 00:21:53.336 [2024-11-29 16:57:17.054548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:17.054592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:17.066174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef1430 00:21:53.336 [2024-11-29 16:57:17.068169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:17.068211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:17.079928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef0bc0 00:21:53.336 [2024-11-29 16:57:17.081714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:17.081756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:17.093469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef0350 00:21:53.336 [2024-11-29 16:57:17.095178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:17.095221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:17.106997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eefae0 00:21:53.336 [2024-11-29 16:57:17.108854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:17.108898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:53.336 [2024-11-29 16:57:17.120650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eef270 00:21:53.336 [2024-11-29 16:57:17.122407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.336 [2024-11-29 16:57:17.122451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:53.596 [2024-11-29 16:57:17.135123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eeea00 00:21:53.596 [2024-11-29 16:57:17.136996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.596 [2024-11-29 16:57:17.137038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:53.596 [2024-11-29 16:57:17.148858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eee190 00:21:53.596 [2024-11-29 16:57:17.150586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.596 [2024-11-29 16:57:17.150627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:53.596 [2024-11-29 16:57:17.162417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eed920 00:21:53.596 [2024-11-29 16:57:17.164151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.596 [2024-11-29 16:57:17.164210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:53.596 [2024-11-29 16:57:17.176156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eed0b0 00:21:53.596 [2024-11-29 16:57:17.177973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.596 [2024-11-29 16:57:17.178015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:53.596 [2024-11-29 16:57:17.190035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eec840 00:21:53.596 [2024-11-29 16:57:17.191727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.596 [2024-11-29 16:57:17.191809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:53.596 [2024-11-29 16:57:17.203559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eebfd0 00:21:53.596 [2024-11-29 16:57:17.205200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.596 [2024-11-29 16:57:17.205245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:53.596 [2024-11-29 16:57:17.217053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eeb760 00:21:53.596 [2024-11-29 16:57:17.218729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.596 [2024-11-29 16:57:17.218771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:53.596 [2024-11-29 16:57:17.230524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eeaef0 00:21:53.596 [2024-11-29 16:57:17.232194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.596 [2024-11-29 16:57:17.232237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:53.596 [2024-11-29 16:57:17.244127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eea680 00:21:53.596 [2024-11-29 16:57:17.245810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.596 [2024-11-29 16:57:17.245851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:53.596 [2024-11-29 16:57:17.257702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee9e10 00:21:53.596 [2024-11-29 16:57:17.259263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.596 [2024-11-29 16:57:17.259305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:53.596 [2024-11-29 16:57:17.271134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee95a0 00:21:53.596 [2024-11-29 16:57:17.272810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.596 [2024-11-29 16:57:17.272852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:53.596 [2024-11-29 16:57:17.284772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee8d30 00:21:53.596 [2024-11-29 16:57:17.286320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.596 [2024-11-29 16:57:17.286371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:53.596 [2024-11-29 16:57:17.298463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee84c0 00:21:53.596 [2024-11-29 16:57:17.300138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.596 [2024-11-29 16:57:17.300199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:53.596 [2024-11-29 16:57:17.312519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee7c50 00:21:53.596 [2024-11-29 16:57:17.314073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.597 [2024-11-29 16:57:17.314119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:53.597 [2024-11-29 16:57:17.326206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee73e0 00:21:53.597 [2024-11-29 16:57:17.327754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.597 [2024-11-29 16:57:17.327820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:53.597 [2024-11-29 16:57:17.339889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee6b70 00:21:53.597 [2024-11-29 16:57:17.341471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.597 [2024-11-29 
16:57:17.341514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:53.597 [2024-11-29 16:57:17.354000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee6300 00:21:53.597 [2024-11-29 16:57:17.355499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.597 [2024-11-29 16:57:17.355541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:53.597 [2024-11-29 16:57:17.367496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee5a90 00:21:53.597 [2024-11-29 16:57:17.368971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.597 [2024-11-29 16:57:17.369015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:53.597 [2024-11-29 16:57:17.381038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee5220 00:21:53.597 [2024-11-29 16:57:17.382577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.597 [2024-11-29 16:57:17.382623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:53.856 [2024-11-29 16:57:17.395807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee49b0 00:21:53.856 [2024-11-29 16:57:17.397275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.856 [2024-11-29 16:57:17.397318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:53.856 [2024-11-29 16:57:17.409399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee4140 00:21:53.856 [2024-11-29 16:57:17.410778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.856 [2024-11-29 16:57:17.410821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:53.856 [2024-11-29 16:57:17.422827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee38d0 00:21:53.856 [2024-11-29 16:57:17.424304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.856 [2024-11-29 16:57:17.424358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:53.856 [2024-11-29 16:57:17.436314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee3060 00:21:53.856 [2024-11-29 16:57:17.437671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20814 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:53.856 [2024-11-29 16:57:17.437714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:53.856 17965.00 IOPS, 70.18 MiB/s [2024-11-29T16:57:17.648Z] [2024-11-29 16:57:17.449965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee27f0 00:21:53.856 [2024-11-29 16:57:17.451282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.856 [2024-11-29 16:57:17.451323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:53.856 [2024-11-29 16:57:17.463366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee1f80 00:21:53.856 [2024-11-29 16:57:17.464739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.856 [2024-11-29 16:57:17.464781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:53.856 [2024-11-29 16:57:17.476924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee1710 00:21:53.856 [2024-11-29 16:57:17.478239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.856 [2024-11-29 16:57:17.478279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:53.856 [2024-11-29 16:57:17.490785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee0ea0 00:21:53.856 [2024-11-29 16:57:17.492208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.856 [2024-11-29 16:57:17.492247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:53.856 [2024-11-29 16:57:17.504514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee0630 00:21:53.856 [2024-11-29 16:57:17.505806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.856 [2024-11-29 16:57:17.505851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:53.856 [2024-11-29 16:57:17.518022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016edfdc0 00:21:53.856 [2024-11-29 16:57:17.519260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.856 [2024-11-29 16:57:17.519302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:53.856 [2024-11-29 16:57:17.531417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016edf550 00:21:53.856 [2024-11-29 16:57:17.532811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:11 nsid:1 lba:20374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.856 [2024-11-29 16:57:17.532854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:53.856 [2024-11-29 16:57:17.545091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016edece0 00:21:53.857 [2024-11-29 16:57:17.546338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.857 [2024-11-29 16:57:17.546408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:53.857 [2024-11-29 16:57:17.558584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ede470 00:21:53.857 [2024-11-29 16:57:17.559851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.857 [2024-11-29 16:57:17.559880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:53.857 [2024-11-29 16:57:17.577501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eddc00 00:21:53.857 [2024-11-29 16:57:17.579700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.857 [2024-11-29 16:57:17.579744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:53.857 [2024-11-29 16:57:17.590827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ede470 00:21:53.857 [2024-11-29 16:57:17.593126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.857 [2024-11-29 16:57:17.593169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:53.857 [2024-11-29 16:57:17.604614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016edece0 00:21:53.857 [2024-11-29 16:57:17.606753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.857 [2024-11-29 16:57:17.606798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:53.857 [2024-11-29 16:57:17.618001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016edf550 00:21:53.857 [2024-11-29 16:57:17.620327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.857 [2024-11-29 16:57:17.620379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:53.857 [2024-11-29 16:57:17.632320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016edfdc0 00:21:53.857 [2024-11-29 16:57:17.634635] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.857 [2024-11-29 16:57:17.634679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:54.116 [2024-11-29 16:57:17.649097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee0630 00:21:54.116 [2024-11-29 16:57:17.651819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.116 [2024-11-29 16:57:17.651851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:54.116 [2024-11-29 16:57:17.664910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee0ea0 00:21:54.116 [2024-11-29 16:57:17.667234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.116 [2024-11-29 16:57:17.667277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:54.116 [2024-11-29 16:57:17.678729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee1710 00:21:54.116 [2024-11-29 16:57:17.680945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.116 [2024-11-29 16:57:17.680988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:54.116 [2024-11-29 16:57:17.692555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee1f80 00:21:54.117 [2024-11-29 16:57:17.694613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.694658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:54.117 [2024-11-29 16:57:17.706081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee27f0 00:21:54.117 [2024-11-29 16:57:17.708288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.708333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:54.117 [2024-11-29 16:57:17.719577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee3060 00:21:54.117 [2024-11-29 16:57:17.721629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.721673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:54.117 [2024-11-29 16:57:17.733351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee38d0 00:21:54.117 [2024-11-29 16:57:17.735617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.735663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:54.117 [2024-11-29 16:57:17.748924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee4140 00:21:54.117 [2024-11-29 16:57:17.751264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.751309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:54.117 [2024-11-29 16:57:17.764475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee49b0 00:21:54.117 [2024-11-29 16:57:17.766600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.766644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:54.117 [2024-11-29 16:57:17.779262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee5220 00:21:54.117 [2024-11-29 16:57:17.781386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.781429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:54.117 [2024-11-29 16:57:17.793401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee5a90 00:21:54.117 [2024-11-29 16:57:17.795626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.795669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:54.117 [2024-11-29 16:57:17.808535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee6300 00:21:54.117 [2024-11-29 16:57:17.810586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.810634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:54.117 [2024-11-29 16:57:17.823001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee6b70 00:21:54.117 [2024-11-29 16:57:17.825147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.825190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:54.117 [2024-11-29 16:57:17.837796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee73e0 00:21:54.117 [2024-11-29 16:57:17.839830] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.839861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:54.117 [2024-11-29 16:57:17.852390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee7c50 00:21:54.117 [2024-11-29 16:57:17.854350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.854393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:54.117 [2024-11-29 16:57:17.866539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee84c0 00:21:54.117 [2024-11-29 16:57:17.868740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.868784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:54.117 [2024-11-29 16:57:17.880892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee8d30 00:21:54.117 [2024-11-29 16:57:17.882853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.882895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:54.117 [2024-11-29 16:57:17.895415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee95a0 00:21:54.117 [2024-11-29 16:57:17.897408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.117 [2024-11-29 16:57:17.897451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:17.910875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ee9e10 00:21:54.377 [2024-11-29 16:57:17.912999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:17.913043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:17.925620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eea680 00:21:54.377 [2024-11-29 16:57:17.927510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:17.927556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:17.940502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eeaef0 00:21:54.377 [2024-11-29 
16:57:17.942399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:17.942445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:17.954829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eeb760 00:21:54.377 [2024-11-29 16:57:17.956725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:17.956767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:17.968637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eebfd0 00:21:54.377 [2024-11-29 16:57:17.970392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:17.970435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:17.982344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eec840 00:21:54.377 [2024-11-29 16:57:17.984243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:17.984286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:17.995945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eed0b0 00:21:54.377 [2024-11-29 16:57:17.997744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:17.997786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:18.009647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eed920 00:21:54.377 [2024-11-29 16:57:18.011353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:18.011404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:18.023168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eee190 00:21:54.377 [2024-11-29 16:57:18.025044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:18.025086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:18.036842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eeea00 00:21:54.377 
[2024-11-29 16:57:18.038538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:18.038566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:18.050246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eef270 00:21:54.377 [2024-11-29 16:57:18.052001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:18.052030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:18.063857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016eefae0 00:21:54.377 [2024-11-29 16:57:18.065551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:18.065593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:18.077364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef0350 00:21:54.377 [2024-11-29 16:57:18.079003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:18.079045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:18.090774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef0bc0 00:21:54.377 [2024-11-29 16:57:18.092508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:18.092551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:18.104384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef1430 00:21:54.377 [2024-11-29 16:57:18.106035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:18.106079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:18.118508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef1ca0 00:21:54.377 [2024-11-29 16:57:18.120301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:18.120368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:18.133014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef2510 
00:21:54.377 [2024-11-29 16:57:18.134619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:18.134663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:18.146751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef2d80 00:21:54.377 [2024-11-29 16:57:18.148448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:18.148492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:54.377 [2024-11-29 16:57:18.160365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef35f0 00:21:54.377 [2024-11-29 16:57:18.161934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.377 [2024-11-29 16:57:18.161976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.175125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef3e60 00:21:54.637 [2024-11-29 16:57:18.176889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.176933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.188977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef46d0 00:21:54.637 [2024-11-29 16:57:18.190515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.190557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.202541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef4f40 00:21:54.637 [2024-11-29 16:57:18.204100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.204159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.216095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef57b0 00:21:54.637 [2024-11-29 16:57:18.217680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.217723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.229648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with 
pdu=0x200016ef6020 00:21:54.637 [2024-11-29 16:57:18.231109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.231153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.243146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef6890 00:21:54.637 [2024-11-29 16:57:18.244755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.244799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.256684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef7100 00:21:54.637 [2024-11-29 16:57:18.258130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.258173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.272756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef7970 00:21:54.637 [2024-11-29 16:57:18.274471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.274531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.289575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef81e0 00:21:54.637 [2024-11-29 16:57:18.290975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.291002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.303220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef8a50 00:21:54.637 [2024-11-29 16:57:18.304803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.304847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.316946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016ef92c0 00:21:54.637 [2024-11-29 16:57:18.318364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.318436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.331405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbfa330) with pdu=0x200016ef9b30 00:21:54.637 [2024-11-29 16:57:18.332902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.332946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.345830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efa3a0 00:21:54.637 [2024-11-29 16:57:18.347204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.347246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.359413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efac10 00:21:54.637 [2024-11-29 16:57:18.360773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.360816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.372921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efb480 00:21:54.637 [2024-11-29 16:57:18.374266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.374308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.386537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efbcf0 00:21:54.637 [2024-11-29 16:57:18.387895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.387924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.400120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efc560 00:21:54.637 [2024-11-29 16:57:18.401385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.401454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:54.637 [2024-11-29 16:57:18.413654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efcdd0 00:21:54.637 [2024-11-29 16:57:18.414987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.637 [2024-11-29 16:57:18.415029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:54.896 [2024-11-29 16:57:18.427808] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efd640 00:21:54.896 [2024-11-29 16:57:18.429131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.896 [2024-11-29 16:57:18.429173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:54.896 [2024-11-29 16:57:18.441961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa330) with pdu=0x200016efdeb0 00:21:54.896 [2024-11-29 16:57:18.443195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.896 [2024-11-29 16:57:18.443239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:54.896 17964.00 IOPS, 70.17 MiB/s 00:21:54.896 Latency(us) 00:21:54.896 [2024-11-29T16:57:18.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.896 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:54.896 nvme0n1 : 2.00 17996.11 70.30 0.00 0.00 7106.97 6166.34 25499.46 00:21:54.896 [2024-11-29T16:57:18.688Z] =================================================================================================================== 00:21:54.896 [2024-11-29T16:57:18.688Z] Total : 17996.11 70.30 0.00 0.00 7106.97 6166.34 25499.46 00:21:54.896 { 00:21:54.896 "results": [ 00:21:54.896 { 00:21:54.896 "job": "nvme0n1", 00:21:54.896 "core_mask": "0x2", 00:21:54.896 "workload": "randwrite", 00:21:54.896 "status": "finished", 00:21:54.896 "queue_depth": 128, 00:21:54.896 "io_size": 4096, 00:21:54.896 "runtime": 2.003544, 00:21:54.896 "iops": 17996.110891500262, 00:21:54.896 "mibps": 70.2973081699229, 00:21:54.896 "io_failed": 0, 00:21:54.896 "io_timeout": 0, 00:21:54.896 "avg_latency_us": 7106.969376928819, 00:21:54.896 "min_latency_us": 6166.341818181818, 00:21:54.896 "max_latency_us": 25499.46181818182 00:21:54.896 } 00:21:54.896 ], 00:21:54.896 "core_count": 1 00:21:54.896 } 00:21:54.896 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:54.896 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:54.896 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:54.896 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:54.896 | .driver_specific 00:21:54.896 | .nvme_error 00:21:54.896 | .status_code 00:21:54.896 | .command_transient_transport_error' 00:21:55.155 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 )) 00:21:55.155 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96939 00:21:55.155 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96939 ']' 00:21:55.155 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96939 00:21:55.155 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:55.155 16:57:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.155 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96939 00:21:55.155 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:55.155 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:55.155 killing process with pid 96939 00:21:55.155 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96939' 00:21:55.155 Received shutdown signal, test time was about 2.000000 seconds 00:21:55.155 00:21:55.155 Latency(us) 00:21:55.155 [2024-11-29T16:57:18.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.155 [2024-11-29T16:57:18.947Z] =================================================================================================================== 00:21:55.155 [2024-11-29T16:57:18.947Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:55.155 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96939 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96939 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96994 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96994 /var/tmp/bperf.sock 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96994 ']' 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.156 16:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:55.156 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:55.156 Zero copy mechanism will not be used. 
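The trace around this point shows how the digest-error pass is wired up: bdevperf is started on core 1 against the Unix-domain RPC socket /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev retries are enabled, the NVMe/TCP controller is attached with --ddgst so data digests are verified, crc32c corruption is injected on the target through accel_error_inject_error, and the transient-transport-error count is read back from bdev_get_iostat. A minimal sketch of that flow, reconstructed from this trace rather than taken from the test script itself, assuming the paths, target address, and NQN seen in the log and that rpc_cmd talks to the target's default RPC socket:

# Sketch (hedged): digest-error pass as reconstructed from the surrounding trace.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf (core mask 0x2), randwrite, 128K I/O, queue depth 16, and wait for RPCs (-z).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &

# 2. Keep per-controller NVMe error counters and retry failed I/O indefinitely at the bdev layer.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Attach the NVMe/TCP controller with data digest enabled (--ddgst).
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Corrupt every 32nd crc32c calculation on the target side so data digests start failing
#    (the trace issues this via rpc_cmd, i.e. against the target's RPC socket, not bperf.sock).
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32

# 5. Run the 2-second workload, then read the transient transport error count back.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests
"$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'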
00:21:55.156 [2024-11-29 16:57:18.935587] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:21:55.156 [2024-11-29 16:57:18.935667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96994 ] 00:21:55.415 [2024-11-29 16:57:19.056868] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:55.415 [2024-11-29 16:57:19.082903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.415 [2024-11-29 16:57:19.103048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.415 [2024-11-29 16:57:19.132661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:55.415 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.415 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:55.415 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:55.415 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:55.674 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:55.674 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.674 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:55.932 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.932 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:55.932 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:56.192 nvme0n1 00:21:56.192 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:56.192 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.192 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:56.192 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.192 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:56.192 16:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:56.192 I/O size of 131072 is greater than zero copy threshold (65536). 
00:21:56.192 Zero copy mechanism will not be used. 00:21:56.192 Running I/O for 2 seconds... 00:21:56.192 [2024-11-29 16:57:19.892665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.892747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.892776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.897358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.897429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.897467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.901935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.902078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.902099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.906310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.906453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.906473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.910726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.910812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.910833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.915081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.915211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.915231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.919653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.919744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.919765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.924132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.924215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.924236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.928584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.928690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.928711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.932987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.933072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.933093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.937371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.937501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.937521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.941875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.941972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.941992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.946418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.946515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.946535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.950784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.950904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 
16:57:19.950925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.955081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.955210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.955230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.959465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.959598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.959619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.964169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.964269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.964289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.968640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.968714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.968734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.973044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.192 [2024-11-29 16:57:19.973118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.192 [2024-11-29 16:57:19.973139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.192 [2024-11-29 16:57:19.977514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.193 [2024-11-29 16:57:19.977600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.193 [2024-11-29 16:57:19.977619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.193 [2024-11-29 16:57:19.982403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.193 [2024-11-29 16:57:19.982482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:56.193 [2024-11-29 16:57:19.982502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:19.987075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:19.987184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:19.987204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:19.991518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:19.991604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:19.991624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:19.996032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:19.996101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:19.996154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.000602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.000687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.000708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.005824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.005926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.005947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.010626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.010725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.010746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.015505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.015609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.015631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.020567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.020703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.020742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.025833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.025949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.025971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.030707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.030806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.030827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.035263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.035425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.035447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.039631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.039741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.039760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.044237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.044338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.044359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.048691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.048789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.048810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.053226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.053378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.053399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.057743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.057841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.057861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.062285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.062414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.062434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.066657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.066774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.066794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.071130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.071259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.071279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.075618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.075721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.075752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.080235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.080333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.080353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.084800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.453 [2024-11-29 16:57:20.084898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.453 [2024-11-29 16:57:20.084918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.453 [2024-11-29 16:57:20.089246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.089392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.089412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.093666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.093783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.093803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.098139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.098268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.098288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.102697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.102793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.102814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.107019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.107144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.107164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.111414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.111530] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.111550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.115836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.115945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.115966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.120244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.120317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.120338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.124652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.124750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.124770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.129011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.129128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.129147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.133497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.133579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.133599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.137886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.138006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.138026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.142295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.142435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.142456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.146717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.146802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.146821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.151381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.151527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.151550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.156292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.156419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.156455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.161039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.161165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.161185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.166069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.166188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.166210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.171250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.171397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.171419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.176535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 
16:57:20.176638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.176674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.181521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.181626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.181647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.186469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.186565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.186585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.191139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.191225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.191246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.196331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.196446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.196466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.201135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.201266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.201287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.205878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.205978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.205998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.210627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 
00:21:56.454 [2024-11-29 16:57:20.210725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.454 [2024-11-29 16:57:20.210745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.454 [2024-11-29 16:57:20.215167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.454 [2024-11-29 16:57:20.215242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.455 [2024-11-29 16:57:20.215263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.455 [2024-11-29 16:57:20.219773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.455 [2024-11-29 16:57:20.219887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.455 [2024-11-29 16:57:20.219907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.455 [2024-11-29 16:57:20.224545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.455 [2024-11-29 16:57:20.224684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.455 [2024-11-29 16:57:20.224705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.455 [2024-11-29 16:57:20.229209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.455 [2024-11-29 16:57:20.229296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.455 [2024-11-29 16:57:20.229318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.455 [2024-11-29 16:57:20.233772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.455 [2024-11-29 16:57:20.233899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.455 [2024-11-29 16:57:20.233919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.455 [2024-11-29 16:57:20.238531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.455 [2024-11-29 16:57:20.238604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.455 [2024-11-29 16:57:20.238627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.715 [2024-11-29 16:57:20.243658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) 
with pdu=0x200016eff3c8 00:21:56.715 [2024-11-29 16:57:20.243771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.715 [2024-11-29 16:57:20.243818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.715 [2024-11-29 16:57:20.248593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.715 [2024-11-29 16:57:20.248688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.715 [2024-11-29 16:57:20.248709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.715 [2024-11-29 16:57:20.253302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.715 [2024-11-29 16:57:20.253414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.715 [2024-11-29 16:57:20.253436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.715 [2024-11-29 16:57:20.258082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.715 [2024-11-29 16:57:20.258169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.715 [2024-11-29 16:57:20.258189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.715 [2024-11-29 16:57:20.262761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.715 [2024-11-29 16:57:20.262846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.715 [2024-11-29 16:57:20.262866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.715 [2024-11-29 16:57:20.267413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.715 [2024-11-29 16:57:20.267517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.715 [2024-11-29 16:57:20.267539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.715 [2024-11-29 16:57:20.272330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.715 [2024-11-29 16:57:20.272432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.715 [2024-11-29 16:57:20.272452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.715 [2024-11-29 16:57:20.276877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.715 [2024-11-29 16:57:20.277007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.277027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.281475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.281559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.281579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.286148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.286275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.286295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.290748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.290835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.290855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.295418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.295508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.295529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.300387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.300500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.300520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.305065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.305195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.305216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.309643] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.309735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.309756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.314371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.314559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.314580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.318964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.319090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.319110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.323617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.323734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.323754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.328278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.328381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.328404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.333229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.333327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.333364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.337899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.338026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.338046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.342320] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.342462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.342482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.346792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.346918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.346937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.351389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.351494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.351514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.355721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.355847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.355868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.360168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.360278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.360298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.364704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.364783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.364803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.369090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.369212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.369232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 
16:57:20.373935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.374035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.374054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.378531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.378629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.378649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.382862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.382991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.383010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.387278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.387375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.387395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.391668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.391812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.391832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.396103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.396196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.396216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.716 [2024-11-29 16:57:20.400570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.716 [2024-11-29 16:57:20.400656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.716 [2024-11-29 16:57:20.400676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
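Each repetition above is the same three-line pattern: tcp.c reports a CRC-32C data digest error on a received data PDU, nvme_qpair.c prints the affected WRITE command, and that command then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). As a rough, hypothetical sketch only (this is not SPDK code; the payload buffer, variable names, and the simulated bit flip are invented for illustration, and it assumes the conventional CRC-32C parameters for the data digest), the kind of digest check behind those messages can be pictured like this:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only: a textbook bitwise CRC-32C (Castagnoli, reflected
 * polynomial 0x82F63B78, init and final XOR of 0xFFFFFFFF). The parameters
 * are an assumption for this sketch, not SPDK's implementation. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Hypothetical stand-in for one write's data PDU payload. */
    uint8_t payload[4096];
    memset(payload, 0xA5, sizeof(payload));

    /* Digest computed by the sender and carried with the PDU. */
    uint32_t sent_ddgst = crc32c(payload, sizeof(payload));

    /* Simulate corruption in flight, then recompute on "receive". */
    payload[100] ^= 0x01;
    uint32_t recv_ddgst = crc32c(payload, sizeof(payload));

    if (recv_ddgst != sent_ddgst) {
        /* Roughly the mismatch condition that data_crc32_calc_done reports
         * above; the request then completes as a transient transport error. */
        printf("data digest mismatch: sent 0x%08x, computed 0x%08x\n",
               (unsigned)sent_ddgst, (unsigned)recv_ddgst);
        return 1;
    }
    return 0;
}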
00:21:56.717 [2024-11-29 16:57:20.405023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.405141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.405161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.409488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.409562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.409582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.413872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.413989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.414010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.418239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.418355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.418375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.422604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.422688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.422708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.426961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.427062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.427082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.431342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.431460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.431480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.435661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.435760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.435804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.440053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.440178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.440198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.444511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.444611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.444631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.448982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.449056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.449076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.453443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.453542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.453562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.457902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.457976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.457997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.462313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.462452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.462472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.466761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.466835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.466855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.471170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.471289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.471308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.475608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.475693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.475713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.480088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.480185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.480205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.484548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.484646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.484667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.488994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.489067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.489086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.493433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.493531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.493551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.497895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.498025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.498044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.717 [2024-11-29 16:57:20.502687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.717 [2024-11-29 16:57:20.502762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.717 [2024-11-29 16:57:20.502783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.978 [2024-11-29 16:57:20.507670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.507820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.507857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.512413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.512555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.512576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.517028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.517150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.517171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.521490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.521609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.521628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.525957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.526077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.526097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.530476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.530577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.530597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.534872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.534945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.534965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.539190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.539318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.539349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.543752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.543887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.543908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.548315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.548431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.548451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.552711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.552801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.552820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.557282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.557410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 
16:57:20.557430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.561782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.561866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.561886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.566125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.566257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.566277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.570607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.570706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.570725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.574906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.575036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.575056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.579432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.579562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.579583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.583815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.583902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.583922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.588307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.588425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:56.979 [2024-11-29 16:57:20.588445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.592787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.592861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.592881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.597231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.597389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.597410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.601682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.601796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.601816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.606150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.606269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.606289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.610589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.610663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.610683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.614939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.615013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.615033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.619533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.619618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.619638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.979 [2024-11-29 16:57:20.623840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.979 [2024-11-29 16:57:20.623927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.979 [2024-11-29 16:57:20.623948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.628253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.628358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.628379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.632658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.632764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.632784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.637062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.637163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.637182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.641539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.641637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.641657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.645914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.645987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.646007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.650304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.650458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.650479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.654723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.654830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.654850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.659124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.659209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.659229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.663586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.663659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.663679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.668026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.668093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.668143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.672716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.672832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.672854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.677609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.677706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.677727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.682027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.682154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.682174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.686580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.686660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.686681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.691004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.691125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.691144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.695477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.695562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.695582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.700481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.700597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.700618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.705326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.705505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.705526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.710470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.710598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.710620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.715596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.715720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.715768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.721197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.721282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.721302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.726222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.726311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.726348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.731221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.731307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.731327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.736256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.736367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.736388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.741045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.741176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.741196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.745895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.746026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.746046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.980 [2024-11-29 16:57:20.750365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.980 [2024-11-29 16:57:20.750461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.980 [2024-11-29 16:57:20.750481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.981 [2024-11-29 16:57:20.754732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.981 [2024-11-29 16:57:20.754830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.981 [2024-11-29 16:57:20.754850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.981 [2024-11-29 16:57:20.759124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.981 [2024-11-29 16:57:20.759250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.981 [2024-11-29 16:57:20.759270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.981 [2024-11-29 16:57:20.763656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:56.981 [2024-11-29 16:57:20.763767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.981 [2024-11-29 16:57:20.763814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.241 [2024-11-29 16:57:20.768538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.241 [2024-11-29 16:57:20.768652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.241 [2024-11-29 16:57:20.768672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.241 [2024-11-29 16:57:20.773043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.241 [2024-11-29 16:57:20.773171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.241 [2024-11-29 16:57:20.773191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.241 [2024-11-29 16:57:20.777795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.241 [2024-11-29 16:57:20.777913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.241 [2024-11-29 16:57:20.777934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.241 [2024-11-29 16:57:20.782257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.241 [2024-11-29 
16:57:20.782401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.782421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.786642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.786714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.786733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.790951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.791025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.791044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.795388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.795520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.795540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.799847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.799963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.799984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.804375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.804489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.804509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.808853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.808951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.808971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.813261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 
00:21:57.242 [2024-11-29 16:57:20.813399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.813419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.817642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.817740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.817760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.822157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.822241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.822261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.826653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.826792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.826813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.831130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.831261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.831281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.835649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.835742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.835763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.840129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.840215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.840235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.844531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) 
with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.844625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.844646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.848969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.849044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.849064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.853478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.853564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.853583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.857828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.857954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.857974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.862222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.862372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.862392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.866734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.866830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.866850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.871187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.871299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.871318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.875717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.875814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.875834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.880250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.880358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.880379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.242 6720.00 IOPS, 840.00 MiB/s [2024-11-29T16:57:21.034Z] [2024-11-29 16:57:20.885742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.885858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.885878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.890190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.890308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.890328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.894683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.894769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.894789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.899257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.899343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.242 [2024-11-29 16:57:20.899363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.242 [2024-11-29 16:57:20.903719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.242 [2024-11-29 16:57:20.903839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.903859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 
16:57:20.908207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.908292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.908313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.912639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.912747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.912766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.916988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.917106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.917125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.921380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.921516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.921536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.925743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.925827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.925847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.930108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.930233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.930254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.934505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.934578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.934599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:21:57.243 [2024-11-29 16:57:20.938830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.938960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.938980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.943277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.943434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.943454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.947890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.948000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.948020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.952324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.952495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.952516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.956811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.956911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.956931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.961303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.961423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.961444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.965959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.966038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.966058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.970397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.970495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.970515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.974818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.974945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.974965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.979254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.979383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.979416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.983652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.983749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.983769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.988095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.988239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.988259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.992571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.992696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.992716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:20.996955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:20.997060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:20.997079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:21.001565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:21.001674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:21.001695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:21.006014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:21.006263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:21.006284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:21.011286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:21.011396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:21.011427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:21.015631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:21.015728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:21.015748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:21.020031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:21.020146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.243 [2024-11-29 16:57:21.020166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.243 [2024-11-29 16:57:21.024559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.243 [2024-11-29 16:57:21.024642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.244 [2024-11-29 16:57:21.024663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.244 [2024-11-29 16:57:21.029359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.244 [2024-11-29 16:57:21.029473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.244 [2024-11-29 16:57:21.029495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.504 [2024-11-29 16:57:21.034184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.504 [2024-11-29 16:57:21.034446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.504 [2024-11-29 16:57:21.034478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.504 [2024-11-29 16:57:21.039179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.504 [2024-11-29 16:57:21.039260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.504 [2024-11-29 16:57:21.039280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.504 [2024-11-29 16:57:21.043739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.504 [2024-11-29 16:57:21.043866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.504 [2024-11-29 16:57:21.043888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.504 [2024-11-29 16:57:21.048298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.504 [2024-11-29 16:57:21.048418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.504 [2024-11-29 16:57:21.048452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.504 [2024-11-29 16:57:21.052816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.504 [2024-11-29 16:57:21.052938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.504 [2024-11-29 16:57:21.052957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.504 [2024-11-29 16:57:21.057200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.504 [2024-11-29 16:57:21.057321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.504 [2024-11-29 16:57:21.057369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.504 [2024-11-29 16:57:21.061763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.504 [2024-11-29 16:57:21.061843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.504 [2024-11-29 16:57:21.061863] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.504 [2024-11-29 16:57:21.066173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.504 [2024-11-29 16:57:21.066441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.504 [2024-11-29 16:57:21.066463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.504 [2024-11-29 16:57:21.070952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.071071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.071091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.075349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.075448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.075468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.079709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.079816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.079836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.084247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.084326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.084346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.088679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.088779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.088798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.093036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.093157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.093177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.097550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.097644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.097663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.102018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.102251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.102272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.106758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.106880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.106900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.111245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.111371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.111392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.115744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.115870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.115890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.120233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.120355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.120391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.124601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.124735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 
16:57:21.124755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.128967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.129058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.129078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.133471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.133603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.133623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.137860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.137952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.137972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.142337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.142431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.142452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.146776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.146854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.146874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.151291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.151401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.151421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.155752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.155861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:57.505 [2024-11-29 16:57:21.155882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.160211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.160308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.160327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.164752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.164849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.164868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.169345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.169595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.169616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.174032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.174162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.174182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.178500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.178601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.178621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.182958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.183056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.183076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.187426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.187555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:57.505 [2024-11-29 16:57:21.187575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.505 [2024-11-29 16:57:21.191766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.505 [2024-11-29 16:57:21.191873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.191893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.196261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.196374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.196395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.200744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.200824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.200843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.205314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.205463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.205483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.209709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.209789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.209809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.214165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.214263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.214283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.218558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.218637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.218657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.222948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.223050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.223070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.227401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.227510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.227530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.231710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.231831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.231851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.236185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.236265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.236284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.240659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.240724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.240744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.245067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.245193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.245213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.249551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.249646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.249666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.253942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.254023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.254043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.258312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.258486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.258507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.262758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.262886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.262905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.267106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.267232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.267253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.271641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.271750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.271770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.276043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.276309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.276330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.280837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.280935] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.280955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.285198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.285324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.285356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.506 [2024-11-29 16:57:21.289787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.506 [2024-11-29 16:57:21.289886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.506 [2024-11-29 16:57:21.289907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.294698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.294792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.294813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.299185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.299309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.299330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.304047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.304413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.304435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.308796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.308895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.308916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.313273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.313430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.313450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.317714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.317805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.317826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.322203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.322308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.322327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.326727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.326806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.326825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.331106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.331202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.331222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.336183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.336276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.336295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.341225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.341306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.341326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.346192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 
16:57:21.346308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.346329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.351528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.351627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.351652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.356907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.357008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.357030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.362040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.362175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.362195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.367140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.367432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.367470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.372353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.372495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.372533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.377251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.377412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.377435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.382305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 
00:21:57.767 [2024-11-29 16:57:21.382509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.382531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.387079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.387332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.387371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.767 [2024-11-29 16:57:21.392251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.767 [2024-11-29 16:57:21.392364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.767 [2024-11-29 16:57:21.392401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.397045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.397126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.397146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.401924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.402024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.402044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.406569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.406650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.406670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.411218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.411493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.411516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.416098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) 
with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.416185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.416205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.420902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.421005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.421025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.425464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.425558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.425579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.429982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.430062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.430082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.434637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.434718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.434738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.439374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.439506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.439527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.443963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.444050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.444070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.448579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.448672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.448692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.453148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.453229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.453249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.457924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.458024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.458045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.462536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.462636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.462657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.467034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.467309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.467331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.472277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.472386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.472407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.476939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.477031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.477051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.481538] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.481619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.481639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.486156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.486263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.486285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.490825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.490947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.490968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.495468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.495566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.495587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.499976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.500044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.500065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.504836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.504919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.504938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.509492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.509591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.509611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.514077] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.514175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.514195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.768 [2024-11-29 16:57:21.518809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.768 [2024-11-29 16:57:21.518932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.768 [2024-11-29 16:57:21.518953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.769 [2024-11-29 16:57:21.523456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.769 [2024-11-29 16:57:21.523553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.769 [2024-11-29 16:57:21.523574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.769 [2024-11-29 16:57:21.528254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.769 [2024-11-29 16:57:21.528335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.769 [2024-11-29 16:57:21.528357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.769 [2024-11-29 16:57:21.533118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.769 [2024-11-29 16:57:21.533270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.769 [2024-11-29 16:57:21.533290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:57.769 [2024-11-29 16:57:21.537988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.769 [2024-11-29 16:57:21.538079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.769 [2024-11-29 16:57:21.538099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:57.769 [2024-11-29 16:57:21.542647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.769 [2024-11-29 16:57:21.542744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.769 [2024-11-29 16:57:21.542765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:57.769 [2024-11-29 
16:57:21.547077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.769 [2024-11-29 16:57:21.547192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.769 [2024-11-29 16:57:21.547212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:57.769 [2024-11-29 16:57:21.551669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:57.769 [2024-11-29 16:57:21.551803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.769 [2024-11-29 16:57:21.551824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.029 [2024-11-29 16:57:21.556820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.029 [2024-11-29 16:57:21.556924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.029 [2024-11-29 16:57:21.556946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.029 [2024-11-29 16:57:21.561472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.029 [2024-11-29 16:57:21.561612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.029 [2024-11-29 16:57:21.561633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.029 [2024-11-29 16:57:21.566280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.029 [2024-11-29 16:57:21.566394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.029 [2024-11-29 16:57:21.566415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.029 [2024-11-29 16:57:21.570887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.029 [2024-11-29 16:57:21.570983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.029 [2024-11-29 16:57:21.571004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.029 [2024-11-29 16:57:21.575412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.029 [2024-11-29 16:57:21.575527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.029 [2024-11-29 16:57:21.575547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:21:58.029 [2024-11-29 16:57:21.579884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.029 [2024-11-29 16:57:21.580019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.580040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.584465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.584616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.584636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.589011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.589149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.589169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.593612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.593710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.593745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.598034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.598150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.598170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.602702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.602794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.602815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.607163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.607259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.607279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.611573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.611687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.611707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.616272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.616418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.616440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.620992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.621110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.621130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.625526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.625625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.625646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.629869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.630020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.630040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.634445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.634558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.634578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.638916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.639054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.639074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.643563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.643680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.643701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.648188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.648295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.648315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.652714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.652828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.652848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.657195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.657354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.657388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.661683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.661819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.661839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.666247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.666345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.666366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.670687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.670784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.670804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.675032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.675173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.675193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.679635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.679737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.679757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.684216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.684308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.684328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.688811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.688928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.688951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.693275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.693448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.693469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.697919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.698016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.698038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.030 [2024-11-29 16:57:21.702446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.030 [2024-11-29 16:57:21.702562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.030 [2024-11-29 16:57:21.702582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.706956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.707053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.707072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.711441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.711558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.711581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.716025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.716137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.716158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.720525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.720633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.720653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.725612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.725768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.725789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.730849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.730964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.730985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.736018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.736143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 
16:57:21.736164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.741381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.741553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.741575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.746601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.746766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.746786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.751627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.751756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.751784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.756762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.756879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.756899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.761785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.761914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.761934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.766824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.766920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.766940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.771631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.771772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:58.031 [2024-11-29 16:57:21.771816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.776719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.776843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.776863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.781531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.781640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.781660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.786031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.786175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.786195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.790556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.790671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.790691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.795056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.795199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.795218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.799683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.799769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.799810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.804321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.804476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.804495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.808852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.808948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.808968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.813252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.813409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.813429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.031 [2024-11-29 16:57:21.818345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.031 [2024-11-29 16:57:21.818437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.031 [2024-11-29 16:57:21.818457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.291 [2024-11-29 16:57:21.822966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.291 [2024-11-29 16:57:21.823080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.291 [2024-11-29 16:57:21.823100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.291 [2024-11-29 16:57:21.827864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.291 [2024-11-29 16:57:21.827960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.291 [2024-11-29 16:57:21.827983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.291 [2024-11-29 16:57:21.832608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.291 [2024-11-29 16:57:21.832709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.291 [2024-11-29 16:57:21.832729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.291 [2024-11-29 16:57:21.837182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.291 [2024-11-29 16:57:21.837333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.291 [2024-11-29 16:57:21.837366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.291 [2024-11-29 16:57:21.841750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.291 [2024-11-29 16:57:21.841887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.291 [2024-11-29 16:57:21.841907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.291 [2024-11-29 16:57:21.846370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.291 [2024-11-29 16:57:21.846509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.291 [2024-11-29 16:57:21.846528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.291 [2024-11-29 16:57:21.850963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.291 [2024-11-29 16:57:21.851080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.291 [2024-11-29 16:57:21.851100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.291 [2024-11-29 16:57:21.855453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.291 [2024-11-29 16:57:21.855576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.291 [2024-11-29 16:57:21.855597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.291 [2024-11-29 16:57:21.860174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.291 [2024-11-29 16:57:21.860269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.291 [2024-11-29 16:57:21.860289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.291 [2024-11-29 16:57:21.864822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.291 [2024-11-29 16:57:21.864922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.291 [2024-11-29 16:57:21.864942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.291 [2024-11-29 16:57:21.869555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.291 [2024-11-29 16:57:21.869652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.291 [2024-11-29 16:57:21.869673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:58.291 [2024-11-29 16:57:21.873988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.291 [2024-11-29 16:57:21.874131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.291 [2024-11-29 16:57:21.874152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:58.291 [2024-11-29 16:57:21.878416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.291 [2024-11-29 16:57:21.878529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.291 [2024-11-29 16:57:21.878549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:58.291 [2024-11-29 16:57:21.883221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbfa670) with pdu=0x200016eff3c8 00:21:58.291 [2024-11-29 16:57:21.883354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.291 [2024-11-29 16:57:21.883388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:58.291 6712.00 IOPS, 839.00 MiB/s 00:21:58.291 Latency(us) 00:21:58.291 [2024-11-29T16:57:22.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.291 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:58.291 nvme0n1 : 2.00 6710.72 838.84 0.00 0.00 2378.87 1422.43 11796.48 00:21:58.291 [2024-11-29T16:57:22.083Z] =================================================================================================================== 00:21:58.291 [2024-11-29T16:57:22.083Z] Total : 6710.72 838.84 0.00 0.00 2378.87 1422.43 11796.48 00:21:58.291 { 00:21:58.291 "results": [ 00:21:58.291 { 00:21:58.291 "job": "nvme0n1", 00:21:58.291 "core_mask": "0x2", 00:21:58.291 "workload": "randwrite", 00:21:58.292 "status": "finished", 00:21:58.292 "queue_depth": 16, 00:21:58.292 "io_size": 131072, 00:21:58.292 "runtime": 2.003659, 00:21:58.292 "iops": 6710.722732760415, 00:21:58.292 "mibps": 838.8403415950519, 00:21:58.292 "io_failed": 0, 00:21:58.292 "io_timeout": 0, 00:21:58.292 "avg_latency_us": 2378.8713718172353, 00:21:58.292 "min_latency_us": 1422.429090909091, 00:21:58.292 "max_latency_us": 11796.48 00:21:58.292 } 00:21:58.292 ], 00:21:58.292 "core_count": 1 00:21:58.292 } 00:21:58.292 16:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:58.292 16:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:58.292 16:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:58.292 16:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:58.292 | .driver_specific 00:21:58.292 | .nvme_error 00:21:58.292 | .status_code 00:21:58.292 | .command_transient_transport_error' 00:21:58.551 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 434 > 0 )) 00:21:58.551 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96994 00:21:58.551 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96994 ']' 00:21:58.551 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96994 00:21:58.551 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:58.551 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.551 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96994 00:21:58.551 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:58.551 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:58.551 killing process with pid 96994 00:21:58.551 Received shutdown signal, test time was about 2.000000 seconds 00:21:58.551 00:21:58.551 Latency(us) 00:21:58.551 [2024-11-29T16:57:22.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.551 [2024-11-29T16:57:22.343Z] =================================================================================================================== 00:21:58.551 [2024-11-29T16:57:22.343Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:58.551 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96994' 00:21:58.551 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96994 00:21:58.551 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96994 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 96806 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96806 ']' 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96806 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96806 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:58.811 killing process with pid 96806 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96806' 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96806 00:21:58.811 16:57:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96806 00:21:58.811 00:21:58.811 real 0m15.774s 00:21:58.811 user 0m30.409s 00:21:58.811 sys 0m4.315s 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:58.811 ************************************ 00:21:58.811 END TEST nvmf_digest_error 00:21:58.811 ************************************ 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.811 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:59.070 rmmod nvme_tcp 00:21:59.070 rmmod nvme_fabrics 00:21:59.070 rmmod nvme_keyring 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 96806 ']' 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 96806 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 96806 ']' 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 96806 00:21:59.070 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (96806) - No such process 00:21:59.070 Process with pid 96806 is not found 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 96806 is not found' 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 
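The transient-error check traced just above works by reading per-bdev NVMe error statistics out of the bperf process over its RPC socket. A minimal sketch of that query, using the same socket path, bdev name, and jq filter that appear in this run (an illustrative sketch, not the digest.sh helper itself):

    # Ask the bdevperf app how many completions carried the
    # "command transient transport error" status, then assert it is non-zero.
    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
              bdev_get_iostat -b nvme0n1 \
            | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( count > 0 ))   # in this run the count was 434

Each data digest error injected on the TCP connection is completed back to the host as a transient transport error (status 00/22), which is why the long run of "Data digest error ... COMMAND TRANSIENT TRANSPORT ERROR" pairs earlier in the log feeds directly into this counter.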
00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.070 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.329 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:59.329 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.329 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.329 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.329 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:21:59.329 00:21:59.329 real 0m33.520s 00:21:59.329 user 1m2.914s 00:21:59.329 sys 0m9.161s 00:21:59.329 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.329 16:57:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:59.329 ************************************ 00:21:59.329 END TEST nvmf_digest 00:21:59.329 ************************************ 00:21:59.329 16:57:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:59.329 16:57:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:59.329 16:57:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:59.329 16:57:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:59.329 16:57:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.329 16:57:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.329 ************************************ 00:21:59.329 START TEST nvmf_host_multipath 00:21:59.329 ************************************ 00:21:59.329 16:57:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:59.329 * Looking for test storage... 
00:21:59.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:59.329 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.589 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.589 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:59.589 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:59.589 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:59.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.590 --rc genhtml_branch_coverage=1 00:21:59.590 --rc genhtml_function_coverage=1 00:21:59.590 --rc genhtml_legend=1 00:21:59.590 --rc geninfo_all_blocks=1 00:21:59.590 --rc geninfo_unexecuted_blocks=1 00:21:59.590 00:21:59.590 ' 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:59.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.590 --rc genhtml_branch_coverage=1 00:21:59.590 --rc genhtml_function_coverage=1 00:21:59.590 --rc genhtml_legend=1 00:21:59.590 --rc geninfo_all_blocks=1 00:21:59.590 --rc geninfo_unexecuted_blocks=1 00:21:59.590 00:21:59.590 ' 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:59.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.590 --rc genhtml_branch_coverage=1 00:21:59.590 --rc genhtml_function_coverage=1 00:21:59.590 --rc genhtml_legend=1 00:21:59.590 --rc geninfo_all_blocks=1 00:21:59.590 --rc geninfo_unexecuted_blocks=1 00:21:59.590 00:21:59.590 ' 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:59.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.590 --rc genhtml_branch_coverage=1 00:21:59.590 --rc genhtml_function_coverage=1 00:21:59.590 --rc genhtml_legend=1 00:21:59.590 --rc geninfo_all_blocks=1 00:21:59.590 --rc geninfo_unexecuted_blocks=1 00:21:59.590 00:21:59.590 ' 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.590 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.590 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:59.591 Cannot find device "nvmf_init_br" 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:59.591 Cannot find device "nvmf_init_br2" 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:59.591 Cannot find device "nvmf_tgt_br" 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.591 Cannot find device "nvmf_tgt_br2" 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:59.591 Cannot find device "nvmf_init_br" 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:59.591 Cannot find device "nvmf_init_br2" 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:59.591 Cannot find device "nvmf_tgt_br" 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:59.591 Cannot find device "nvmf_tgt_br2" 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:59.591 Cannot find device "nvmf_br" 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:59.591 Cannot find device "nvmf_init_if" 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:59.591 Cannot find device "nvmf_init_if2" 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:59.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:59.591 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:59.850 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:59.850 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:59.850 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:59.850 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:59.850 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:59.850 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:59.850 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:59.850 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:59.850 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:59.850 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:59.850 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.850 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.850 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.850 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
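The nvmf_veth_init commands being traced here stand up the usual autotest network: veth pairs whose initiator ends stay in the default namespace (10.0.0.1/24 and 10.0.0.2/24) while the target ends are moved into nvmf_tgt_ns_spdk (10.0.0.3/24 and 10.0.0.4/24), with the peer *_br interfaces enslaved to an nvmf_br bridge so the two sides can reach each other. A condensed sketch of one initiator/target pair, using the same names and addresses as this run:

    # One veth pair from the topology built by nvmf_veth_init (illustrative sketch).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                          # bridge the peer ends together
    ip link set nvmf_tgt_br master nvmf_br

The remainder of the trace below adds the second pair the same way, opens TCP port 4420 in iptables, and ping-checks all four addresses before the nvmf target is started.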
00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:59.851 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:59.851 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:21:59.851 00:21:59.851 --- 10.0.0.3 ping statistics --- 00:21:59.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.851 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:59.851 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:59.851 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:21:59.851 00:21:59.851 --- 10.0.0.4 ping statistics --- 00:21:59.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.851 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:21:59.851 00:21:59.851 --- 10.0.0.1 ping statistics --- 00:21:59.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.851 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:59.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:59.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:21:59.851 00:21:59.851 --- 10.0.0.2 ping statistics --- 00:21:59.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.851 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=97303 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 97303 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 97303 ']' 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.851 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:00.110 [2024-11-29 16:57:23.646061] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:22:00.110 [2024-11-29 16:57:23.646155] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.110 [2024-11-29 16:57:23.773466] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:22:00.110 [2024-11-29 16:57:23.806388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:00.110 [2024-11-29 16:57:23.830785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.110 [2024-11-29 16:57:23.830852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.110 [2024-11-29 16:57:23.830865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.110 [2024-11-29 16:57:23.830875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.110 [2024-11-29 16:57:23.830884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.110 [2024-11-29 16:57:23.835371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.110 [2024-11-29 16:57:23.835438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.110 [2024-11-29 16:57:23.871923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:00.369 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.369 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:22:00.369 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:00.369 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:00.369 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:00.369 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.369 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=97303 00:22:00.369 16:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:00.628 [2024-11-29 16:57:24.256923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.628 16:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:00.887 Malloc0 00:22:00.887 16:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:01.146 16:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:01.406 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:01.664 [2024-11-29 16:57:25.385576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:01.664 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:01.923 [2024-11-29 16:57:25.605651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:01.923 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:01.923 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=97351 00:22:01.923 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:01.923 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 97351 /var/tmp/bdevperf.sock 00:22:01.923 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 97351 ']' 00:22:01.923 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.923 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.923 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.923 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.923 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:02.182 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.182 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:22:02.182 16:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:02.441 16:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:02.700 Nvme0n1 00:22:02.700 16:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:03.269 Nvme0n1 00:22:03.269 16:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:22:03.269 16:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:04.204 16:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:22:04.204 16:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:04.463 16:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:04.722 16:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # 
confirm_io_on_port optimized 4421 00:22:04.722 16:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97390 00:22:04.722 16:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:04.722 16:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97303 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:11.289 Attaching 4 probes... 00:22:11.289 @path[10.0.0.3, 4421]: 19712 00:22:11.289 @path[10.0.0.3, 4421]: 20376 00:22:11.289 @path[10.0.0.3, 4421]: 20368 00:22:11.289 @path[10.0.0.3, 4421]: 20381 00:22:11.289 @path[10.0.0.3, 4421]: 20120 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97390 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:11.289 16:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:11.546 16:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:11.546 16:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97509 00:22:11.546 16:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:11.546 16:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97303 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:18.110 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 
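At this point the target has been configured over JSON-RPC and bdevperf has attached the same subsystem over both 10.0.0.3:4420 and 10.0.0.3:4421 with -x multipath; each confirm_io_on_port cycle then verifies that I/O actually flows through whichever listener is marked ANA "optimized" (or "non_optimized"). A condensed sketch of the RPC sequence and the listener query, with the rpc.py path, arguments, and jq filter taken from the log; this is a reference reconstruction, not the multipath.sh source:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# target side: TCP transport, a malloc backing bdev, and a subsystem with ANA reporting (-r)
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -r -m 2
$RPC nvmf_subsystem_add_ns $NQN Malloc0
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4421
# flip the ANA state per listener, then ask which listener currently reports "optimized"
$RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
$RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.3 -s 4421 -n optimized
$RPC nvmf_subsystem_get_listeners $NQN \
    | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'

The other half of the check is the bpftrace script scripts/bpf/nvmf_path.bt attached to the nvmf_tgt pid (97303): its @path[10.0.0.3, <port>] counters in trace.txt record which listener the I/O went through, and the awk/cut/sed pipeline below extracts that port so it can be compared against the trsvcid reported by nvmf_subsystem_get_listeners. Both must agree (e.g. 4421 == 4421) before the dtrace pid is killed and the trace file removed.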
00:22:18.110 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:18.110 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:18.110 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:18.110 Attaching 4 probes... 00:22:18.110 @path[10.0.0.3, 4420]: 20175 00:22:18.110 @path[10.0.0.3, 4420]: 20423 00:22:18.110 @path[10.0.0.3, 4420]: 20413 00:22:18.110 @path[10.0.0.3, 4420]: 20392 00:22:18.110 @path[10.0.0.3, 4420]: 20482 00:22:18.110 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:18.110 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:18.110 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:18.110 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:18.110 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:18.110 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:18.110 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97509 00:22:18.110 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:18.110 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:18.111 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:18.111 16:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:18.370 16:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:18.370 16:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97303 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:18.370 16:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97623 00:22:18.370 16:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:24.933 Attaching 4 probes... 
00:22:24.933 @path[10.0.0.3, 4421]: 15882 00:22:24.933 @path[10.0.0.3, 4421]: 19712 00:22:24.933 @path[10.0.0.3, 4421]: 19833 00:22:24.933 @path[10.0.0.3, 4421]: 19978 00:22:24.933 @path[10.0.0.3, 4421]: 19824 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97623 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:24.933 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:25.190 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:25.191 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97737 00:22:25.191 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:25.191 16:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97303 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:31.772 16:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:31.772 16:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:31.772 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:31.772 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:31.772 Attaching 4 probes... 
00:22:31.772 00:22:31.772 00:22:31.772 00:22:31.772 00:22:31.772 00:22:31.772 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:31.772 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:31.772 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:31.772 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:31.773 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:31.773 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:31.773 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97737 00:22:31.773 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:31.773 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:31.773 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:31.773 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:32.041 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:32.041 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97855 00:22:32.041 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:32.041 16:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97303 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:38.612 16:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:38.612 16:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:38.612 16:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:38.612 16:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:38.612 Attaching 4 probes... 
00:22:38.612 @path[10.0.0.3, 4421]: 19186 00:22:38.612 @path[10.0.0.3, 4421]: 19497 00:22:38.612 @path[10.0.0.3, 4421]: 19479 00:22:38.612 @path[10.0.0.3, 4421]: 19447 00:22:38.612 @path[10.0.0.3, 4421]: 19621 00:22:38.612 16:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:38.612 16:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:38.612 16:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:38.612 16:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:38.612 16:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:38.612 16:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:38.612 16:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97855 00:22:38.612 16:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:38.612 16:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:38.612 [2024-11-29 16:58:02.236702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977a80 is same with the state(6) to be set 00:22:38.612 [2024-11-29 16:58:02.236765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977a80 is same with the state(6) to be set 00:22:38.612 16:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:39.551 16:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:39.551 16:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97973 00:22:39.551 16:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97303 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:39.551 16:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:46.118 16:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:46.119 16:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:46.119 16:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:46.119 16:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:46.119 Attaching 4 probes... 
00:22:46.119 @path[10.0.0.3, 4420]: 19263 00:22:46.119 @path[10.0.0.3, 4420]: 19642 00:22:46.119 @path[10.0.0.3, 4420]: 19607 00:22:46.119 @path[10.0.0.3, 4420]: 19726 00:22:46.119 @path[10.0.0.3, 4420]: 19764 00:22:46.119 16:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:46.119 16:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:46.119 16:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:46.119 16:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:46.119 16:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:46.119 16:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:46.119 16:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97973 00:22:46.119 16:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:46.119 16:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:46.119 [2024-11-29 16:58:09.766673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:46.119 16:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:46.378 16:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:52.939 16:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:52.939 16:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98153 00:22:52.939 16:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97303 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:52.939 16:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:59.516 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:59.516 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:59.517 Attaching 4 probes... 
00:22:59.517 @path[10.0.0.3, 4421]: 19245 00:22:59.517 @path[10.0.0.3, 4421]: 19558 00:22:59.517 @path[10.0.0.3, 4421]: 19583 00:22:59.517 @path[10.0.0.3, 4421]: 19604 00:22:59.517 @path[10.0.0.3, 4421]: 19585 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98153 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 97351 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 97351 ']' 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 97351 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97351 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97351' 00:22:59.517 killing process with pid 97351 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 97351 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 97351 00:22:59.517 { 00:22:59.517 "results": [ 00:22:59.517 { 00:22:59.517 "job": "Nvme0n1", 00:22:59.517 "core_mask": "0x4", 00:22:59.517 "workload": "verify", 00:22:59.517 "status": "terminated", 00:22:59.517 "verify_range": { 00:22:59.517 "start": 0, 00:22:59.517 "length": 16384 00:22:59.517 }, 00:22:59.517 "queue_depth": 128, 00:22:59.517 "io_size": 4096, 00:22:59.517 "runtime": 55.505267, 00:22:59.517 "iops": 8397.113016319694, 00:22:59.517 "mibps": 32.801222719998805, 00:22:59.517 "io_failed": 0, 00:22:59.517 "io_timeout": 0, 00:22:59.517 "avg_latency_us": 15214.225988573266, 00:22:59.517 "min_latency_us": 718.6618181818181, 00:22:59.517 "max_latency_us": 7046430.72 00:22:59.517 } 00:22:59.517 ], 00:22:59.517 "core_count": 1 00:22:59.517 } 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 97351 00:22:59.517 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:59.517 [2024-11-29 16:57:25.662511] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 
24.11.0-rc4 initialization... 00:22:59.517 [2024-11-29 16:57:25.662603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97351 ] 00:22:59.517 [2024-11-29 16:57:25.781451] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:59.517 [2024-11-29 16:57:25.807288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.517 [2024-11-29 16:57:25.828271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.517 [2024-11-29 16:57:25.859511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:59.517 Running I/O for 90 seconds... 00:22:59.517 7828.00 IOPS, 30.58 MiB/s [2024-11-29T16:58:23.309Z] 8678.50 IOPS, 33.90 MiB/s [2024-11-29T16:58:23.309Z] 9137.33 IOPS, 35.69 MiB/s [2024-11-29T16:58:23.309Z] 9407.00 IOPS, 36.75 MiB/s [2024-11-29T16:58:23.309Z] 9557.00 IOPS, 37.33 MiB/s [2024-11-29T16:58:23.309Z] 9666.00 IOPS, 37.76 MiB/s [2024-11-29T16:58:23.309Z] 9726.00 IOPS, 37.99 MiB/s [2024-11-29T16:58:23.309Z] 9743.50 IOPS, 38.06 MiB/s [2024-11-29T16:58:23.309Z] [2024-11-29 16:57:35.246935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.517 [2024-11-29 16:57:35.246991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.517 [2024-11-29 16:57:35.247057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.517 [2024-11-29 16:57:35.247077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:59.517 [2024-11-29 16:57:35.247099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.517 [2024-11-29 16:57:35.247113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.517 [2024-11-29 16:57:35.247133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.517 [2024-11-29 16:57:35.247147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.517 [2024-11-29 16:57:35.247166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.517 [2024-11-29 16:57:35.247180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.517 [2024-11-29 16:57:35.247199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.517 [2024-11-29 16:57:35.247212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.517 [2024-11-29 16:57:35.247232] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.517 [2024-11-29 16:57:35.247246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.517 [2024-11-29 16:57:35.247265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.517 [2024-11-29 16:57:35.247278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.517 [2024-11-29 16:57:35.247297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.518 [2024-11-29 16:57:35.247311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.247388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.518 [2024-11-29 16:57:35.247406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.247426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.518 [2024-11-29 16:57:35.247440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.247460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.247474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.247494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.247508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.247527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.247541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.247560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.247574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.247594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.247608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.248036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.248080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.248146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.248195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.248228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.248276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.248310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.248343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.248392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.248443] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.248477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.248511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.248545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.518 [2024-11-29 16:57:35.248579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.518 [2024-11-29 16:57:35.248614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.518 [2024-11-29 16:57:35.248662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.518 [2024-11-29 16:57:35.248697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.518 [2024-11-29 16:57:35.248739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.518 [2024-11-29 16:57:35.248789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.518 [2024-11-29 16:57:35.248808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.518 
[2024-11-29 16:57:35.248822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.248842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.248856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.248875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.248889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.248910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.248924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.248943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.248957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.248975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.248989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.249022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.249055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.249088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.249121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 
nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.249162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.249198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.249243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.249279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.249312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.249362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.249395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.249428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.519 [2024-11-29 16:57:35.249462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.519 [2024-11-29 16:57:35.249511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249531] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.519 [2024-11-29 16:57:35.249545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.519 [2024-11-29 16:57:35.249584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.519 [2024-11-29 16:57:35.249625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.519 [2024-11-29 16:57:35.249671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.519 [2024-11-29 16:57:35.249705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.519 [2024-11-29 16:57:35.249739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.519 [2024-11-29 16:57:35.249773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.519 [2024-11-29 16:57:35.249807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:59.519 [2024-11-29 16:57:35.249828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-11-29 16:57:35.249843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.249862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.249877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0032 
p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.249896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.249910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.249930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.249944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.249963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.249978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.249998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-11-29 16:57:35.250445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-11-29 16:57:35.250479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-11-29 16:57:35.250522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-11-29 16:57:35.250557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-11-29 
16:57:35.250590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-11-29 16:57:35.250625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-11-29 16:57:35.250666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.520 [2024-11-29 16:57:35.250700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.520 [2024-11-29 16:57:35.250804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.520 [2024-11-29 16:57:35.250818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.250839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.250853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.250873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.250887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.250907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.250922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.250942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120336 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.250963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.250984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.521 [2024-11-29 16:57:35.251657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.521 [2024-11-29 16:57:35.251693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 
p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.521 [2024-11-29 16:57:35.251727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.521 [2024-11-29 16:57:35.251761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.521 [2024-11-29 16:57:35.251826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.521 [2024-11-29 16:57:35.251862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.521 [2024-11-29 16:57:35.251914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.521 [2024-11-29 16:57:35.251952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.251972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.251987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.252007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.252022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.252042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.252056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.252076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.521 [2024-11-29 16:57:35.252091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.521 [2024-11-29 16:57:35.252122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.522 [2024-11-29 16:57:35.252136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:35.252156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.522 [2024-11-29 16:57:35.252170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:35.252190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.522 [2024-11-29 16:57:35.252204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:35.252225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.522 [2024-11-29 16:57:35.252239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.522 9712.78 IOPS, 37.94 MiB/s [2024-11-29T16:58:23.314Z] 9771.00 IOPS, 38.17 MiB/s [2024-11-29T16:58:23.314Z] 9811.00 IOPS, 38.32 MiB/s [2024-11-29T16:58:23.314Z] 9845.42 IOPS, 38.46 MiB/s [2024-11-29T16:58:23.314Z] 9873.54 IOPS, 38.57 MiB/s [2024-11-29T16:58:23.314Z] 9893.79 IOPS, 38.65 MiB/s [2024-11-29T16:58:23.314Z] [2024-11-29 16:57:41.778954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:126352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779224] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:126416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:126432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.522 [2024-11-29 16:57:41.779556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.522 [2024-11-29 
16:57:41.779602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.522 [2024-11-29 16:57:41.779638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.522 [2024-11-29 16:57:41.779672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.522 [2024-11-29 16:57:41.779707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.522 [2024-11-29 16:57:41.779741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.522 [2024-11-29 16:57:41.779774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.522 [2024-11-29 16:57:41.779840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.779950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.779987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126472 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.780006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.780027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:126480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.780042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.780062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.780084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.780106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.780135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:59.522 [2024-11-29 16:57:41.780155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:126504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.522 [2024-11-29 16:57:41.780169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:126536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.523 [2024-11-29 16:57:41.780630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.523 [2024-11-29 16:57:41.780664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.523 [2024-11-29 16:57:41.780697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 
dnr:0 00:22:59.523 [2024-11-29 16:57:41.780716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.523 [2024-11-29 16:57:41.780730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.523 [2024-11-29 16:57:41.780763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.523 [2024-11-29 16:57:41.780798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.523 [2024-11-29 16:57:41.780831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.523 [2024-11-29 16:57:41.780865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.780972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.780992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781395] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.523 [2024-11-29 16:57:41.781519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.523 [2024-11-29 16:57:41.781539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.523 [2024-11-29 16:57:41.781553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.781573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.781588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.781607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.781621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.781641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.781655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.781674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.781688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.781708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.781722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.781742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126136 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.781755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.781774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.781788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.781808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.781829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.781851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.781866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.781886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.781901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.781920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.781934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.781954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.781968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.781988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.782002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.782036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.782069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782089] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.782104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.782138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.782171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.782205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.782244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.782279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.782313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.782379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.782431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.782472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 
16:57:41.782493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:126896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.524 [2024-11-29 16:57:41.782508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.782543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.782578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.782613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.782648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.782683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.782718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.782777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.782812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.782846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.782881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.782915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.524 [2024-11-29 16:57:41.782934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.524 [2024-11-29 16:57:41.782949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.782969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.525 [2024-11-29 16:57:41.782983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.783003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.525 [2024-11-29 16:57:41.783018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.783040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.525 [2024-11-29 16:57:41.783054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.783075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.525 [2024-11-29 16:57:41.783089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.783109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.525 [2024-11-29 16:57:41.783124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.783143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.525 [2024-11-29 16:57:41.783158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.783184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.525 [2024-11-29 16:57:41.783199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.783219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.525 [2024-11-29 16:57:41.783234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.783989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.525 [2024-11-29 16:57:41.784017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.784049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.525 [2024-11-29 16:57:41.784066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.784094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.525 [2024-11-29 16:57:41.784124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.784150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.525 [2024-11-29 16:57:41.784165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.784191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:41.784206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.784232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:41.784247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.784273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:41.784288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.784315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:41.784330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.784374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:59.525 [2024-11-29 16:57:41.784410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.784437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:41.784453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.784481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:41.784508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.784552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:41.784571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.784599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:41.784615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:41.784641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:41.784657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:59.525 9787.80 IOPS, 38.23 MiB/s [2024-11-29T16:58:23.317Z] 9285.12 IOPS, 36.27 MiB/s [2024-11-29T16:58:23.317Z] 9317.59 IOPS, 36.40 MiB/s [2024-11-29T16:58:23.317Z] 9351.22 IOPS, 36.53 MiB/s [2024-11-29T16:58:23.317Z] 9378.63 IOPS, 36.64 MiB/s [2024-11-29T16:58:23.317Z] 9404.75 IOPS, 36.74 MiB/s [2024-11-29T16:58:23.317Z] 9430.57 IOPS, 36.84 MiB/s [2024-11-29T16:58:23.317Z] [2024-11-29 16:57:48.858800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:48.858858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.858924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:48.858944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.858965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:48.858980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.858999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:26 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:48.859013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.859032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:48.859045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.859064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:48.859078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.859097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:48.859111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.859129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:48.859143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.859182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:48.859197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.859217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:48.859230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.859266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:48.859280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.859299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:48.859313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.859348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.525 [2024-11-29 16:57:48.859363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.859397] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.525 [2024-11-29 16:57:48.859413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.859434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.525 [2024-11-29 16:57:48.859449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:59.525 [2024-11-29 16:57:48.859469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.859484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.859504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.859518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.859556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.859571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.859591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.859605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.859625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.859639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.859669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.859685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.859704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.859719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.859749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.859763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 
sqhd:005a p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.859783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.859834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.859855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.859870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.859891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.859905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.859926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.859940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.859960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.859975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.859996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.526 [2024-11-29 16:57:48.860360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.526 [2024-11-29 16:57:48.860425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.526 [2024-11-29 16:57:48.860460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.526 [2024-11-29 16:57:48.860501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.526 [2024-11-29 16:57:48.860536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.526 [2024-11-29 16:57:48.860570] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.526 [2024-11-29 16:57:48.860604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.526 [2024-11-29 16:57:48.860646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.526 [2024-11-29 16:57:48.860682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.526 [2024-11-29 16:57:48.860717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.526 [2024-11-29 16:57:48.860765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.526 [2024-11-29 16:57:48.860969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.526 [2024-11-29 16:57:48.860983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:120 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.527 [2024-11-29 16:57:48.861395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.527 [2024-11-29 16:57:48.861432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.527 [2024-11-29 16:57:48.861467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.527 [2024-11-29 16:57:48.861502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.527 [2024-11-29 16:57:48.861536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.527 [2024-11-29 16:57:48.861581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.527 [2024-11-29 16:57:48.861615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.527 [2024-11-29 16:57:48.861650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861670] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.861960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.861975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.862001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.862016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 
p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.862036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.862051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.862070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.862091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.862112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.862126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.862146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.862161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.862180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.862194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.862214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.527 [2024-11-29 16:57:48.862229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:59.527 [2024-11-29 16:57:48.862252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.862268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.862302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.862336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.862386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.862420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.862462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.862498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.862531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.862565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.862599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.862633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.862669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.862704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.862738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.862772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.862805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.862839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.862879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.862915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.862949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.862968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.862982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.863002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.863017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.863037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.863051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.863070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:59.528 [2024-11-29 16:57:48.863085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.863104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.863119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.863138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.863153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.863172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.863186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.863206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.863224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.863244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.863258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.863278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.863293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.863319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.863347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.864044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.528 [2024-11-29 16:57:48.864071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.864105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.864137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.864164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 
lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.864194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.864220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.864234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.864260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.864274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.864305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.864321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.864346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.864361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.864399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.528 [2024-11-29 16:57:48.864418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:59.528 [2024-11-29 16:57:48.864458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:57:48.864477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:59.529 9397.55 IOPS, 36.71 MiB/s [2024-11-29T16:58:23.321Z] 8988.96 IOPS, 35.11 MiB/s [2024-11-29T16:58:23.321Z] 8614.42 IOPS, 33.65 MiB/s [2024-11-29T16:58:23.321Z] 8269.84 IOPS, 32.30 MiB/s [2024-11-29T16:58:23.321Z] 7951.77 IOPS, 31.06 MiB/s [2024-11-29T16:58:23.321Z] 7657.26 IOPS, 29.91 MiB/s [2024-11-29T16:58:23.321Z] 7383.79 IOPS, 28.84 MiB/s [2024-11-29T16:58:23.321Z] 7159.93 IOPS, 27.97 MiB/s [2024-11-29T16:58:23.321Z] 7241.27 IOPS, 28.29 MiB/s [2024-11-29T16:58:23.321Z] 7322.42 IOPS, 28.60 MiB/s [2024-11-29T16:58:23.321Z] 7398.69 IOPS, 28.90 MiB/s [2024-11-29T16:58:23.321Z] 7472.09 IOPS, 29.19 MiB/s [2024-11-29T16:58:23.321Z] 7537.59 IOPS, 29.44 MiB/s [2024-11-29T16:58:23.321Z] 7599.40 IOPS, 29.69 MiB/s [2024-11-29T16:58:23.321Z] [2024-11-29 16:58:02.237144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.237211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.237290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.237329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.237365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.237416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.237472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.237506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.237543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.237593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.237656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.237690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:22:59.529 [2024-11-29 16:58:02.237709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.237722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.237756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.237801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:51680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.237834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.237867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.237902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.237936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.237969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.237988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.238002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.238035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.238067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.238100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.238133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.238210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.238250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.238276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.238302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.238355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.238387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.238430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.238457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.238486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.238521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.529 [2024-11-29 16:58:02.238549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.529 [2024-11-29 16:58:02.238564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.529 [2024-11-29 16:58:02.238594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.238609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.238623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.238637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.238651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.238673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.238688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.238702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.238716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.238731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.238759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:59.530 [2024-11-29 16:58:02.238788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.238800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.238814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.238826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.238841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.238853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.238867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.238879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.238893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.238906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.238920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.238933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.238947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.238959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.238973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.238986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 
16:58:02.239059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.530 [2024-11-29 16:58:02.239072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.530 [2024-11-29 16:58:02.239099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.530 [2024-11-29 16:58:02.239126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.530 [2024-11-29 16:58:02.239152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.530 [2024-11-29 16:58:02.239179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.530 [2024-11-29 16:58:02.239206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.530 [2024-11-29 16:58:02.239233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.530 [2024-11-29 16:58:02.239260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239619] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:52000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.530 [2024-11-29 16:58:02.239729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.530 [2024-11-29 16:58:02.239748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.239762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.239775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.239813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.239846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.239862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.239875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.239890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.239903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.239919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.239932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.239947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52064 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.239960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.239975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.239988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.531 [2024-11-29 16:58:02.240016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.531 [2024-11-29 16:58:02.240044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.531 [2024-11-29 16:58:02.240073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.531 [2024-11-29 16:58:02.240101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.531 [2024-11-29 16:58:02.240144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.531 [2024-11-29 16:58:02.240195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.531 [2024-11-29 16:58:02.240223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.531 [2024-11-29 16:58:02.240249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.531 [2024-11-29 16:58:02.240276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.240303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:52096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.240330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.240356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.240393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.240424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.240450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.240477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.240504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.240530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.240564] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.531 [2024-11-29 16:58:02.240591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873840 is same with the state(6) to be set 00:22:59.531 [2024-11-29 16:58:02.240620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.531 [2024-11-29 16:58:02.240630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.531 [2024-11-29 16:58:02.240641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52176 len:8 PRP1 0x0 PRP2 0x0 00:22:59.531 [2024-11-29 16:58:02.240653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.531 [2024-11-29 16:58:02.240676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.531 [2024-11-29 16:58:02.240685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52184 len:8 PRP1 0x0 PRP2 0x0 00:22:59.531 [2024-11-29 16:58:02.240697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.531 [2024-11-29 16:58:02.240720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.531 [2024-11-29 16:58:02.240729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52192 len:8 PRP1 0x0 PRP2 0x0 00:22:59.531 [2024-11-29 16:58:02.240741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.531 [2024-11-29 16:58:02.240763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.531 [2024-11-29 16:58:02.240773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52200 len:8 PRP1 0x0 PRP2 0x0 00:22:59.531 [2024-11-29 16:58:02.240785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.531 [2024-11-29 16:58:02.240797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.531 [2024-11-29 16:58:02.240807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.531 [2024-11-29 16:58:02.240816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52464 len:8 PRP1 0x0 PRP2 0x0 00:22:59.531 [2024-11-29 16:58:02.240828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:59.532 [2024-11-29 16:58:02.240841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.240850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.240860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52472 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.240871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.240889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.240900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.240910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52480 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.240922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.240934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.240944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.240953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52488 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.240965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.240977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.240987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.240997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52496 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52504 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52512 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241108] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52520 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52528 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52536 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52544 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52552 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52560 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52568 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52576 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52584 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52592 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52600 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52608 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 
16:58:02.241670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52616 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.241703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.532 [2024-11-29 16:58:02.241713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.532 [2024-11-29 16:58:02.241722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52624 len:8 PRP1 0x0 PRP2 0x0 00:22:59.532 [2024-11-29 16:58:02.241734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.242773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:59.532 [2024-11-29 16:58:02.242848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.532 [2024-11-29 16:58:02.242870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.532 [2024-11-29 16:58:02.242906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1864d50 (9): Bad file descriptor 00:22:59.532 [2024-11-29 16:58:02.243256] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.532 [2024-11-29 16:58:02.243290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1864d50 with addr=10.0.0.3, port=4421 00:22:59.532 [2024-11-29 16:58:02.243306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1864d50 is same with the state(6) to be set 00:22:59.532 [2024-11-29 16:58:02.243352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1864d50 (9): Bad file descriptor 00:22:59.532 [2024-11-29 16:58:02.243384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:59.532 [2024-11-29 16:58:02.243400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:59.532 [2024-11-29 16:58:02.243413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:59.532 [2024-11-29 16:58:02.243426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:22:59.532 [2024-11-29 16:58:02.243441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:59.533 7656.56 IOPS, 29.91 MiB/s [2024-11-29T16:58:23.325Z] 7706.05 IOPS, 30.10 MiB/s [2024-11-29T16:58:23.325Z] 7761.45 IOPS, 30.32 MiB/s [2024-11-29T16:58:23.325Z] 7815.36 IOPS, 30.53 MiB/s [2024-11-29T16:58:23.325Z] 7866.18 IOPS, 30.73 MiB/s [2024-11-29T16:58:23.325Z] 7914.51 IOPS, 30.92 MiB/s [2024-11-29T16:58:23.325Z] 7960.55 IOPS, 31.10 MiB/s [2024-11-29T16:58:23.325Z] 7998.67 IOPS, 31.24 MiB/s [2024-11-29T16:58:23.325Z] 8038.34 IOPS, 31.40 MiB/s [2024-11-29T16:58:23.325Z] 8077.71 IOPS, 31.55 MiB/s [2024-11-29T16:58:23.325Z] [2024-11-29 16:58:12.288659] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:22:59.533 8115.91 IOPS, 31.70 MiB/s [2024-11-29T16:58:23.325Z] 8152.94 IOPS, 31.85 MiB/s [2024-11-29T16:58:23.325Z] 8188.08 IOPS, 31.98 MiB/s [2024-11-29T16:58:23.325Z] 8222.61 IOPS, 32.12 MiB/s [2024-11-29T16:58:23.325Z] 8249.04 IOPS, 32.22 MiB/s [2024-11-29T16:58:23.325Z] 8279.76 IOPS, 32.34 MiB/s [2024-11-29T16:58:23.325Z] 8309.00 IOPS, 32.46 MiB/s [2024-11-29T16:58:23.325Z] 8336.53 IOPS, 32.56 MiB/s [2024-11-29T16:58:23.325Z] 8362.30 IOPS, 32.67 MiB/s [2024-11-29T16:58:23.325Z] 8389.45 IOPS, 32.77 MiB/s [2024-11-29T16:58:23.325Z] Received shutdown signal, test time was about 55.505969 seconds 00:22:59.533 00:22:59.533 Latency(us) 00:22:59.533 [2024-11-29T16:58:23.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.533 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:59.533 Verification LBA range: start 0x0 length 0x4000 00:22:59.533 Nvme0n1 : 55.51 8397.11 32.80 0.00 0.00 15214.23 718.66 7046430.72 00:22:59.533 [2024-11-29T16:58:23.325Z] =================================================================================================================== 00:22:59.533 [2024-11-29T16:58:23.325Z] Total : 8397.11 32.80 0.00 0.00 15214.23 718.66 7046430.72 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:59.533 rmmod nvme_tcp 00:22:59.533 rmmod nvme_fabrics 00:22:59.533 rmmod nvme_keyring 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 
00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 97303 ']' 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 97303 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 97303 ']' 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 97303 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.533 16:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97303 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:59.533 killing process with pid 97303 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97303' 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 97303 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 97303 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 
-- # ip link delete nvmf_br type bridge 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:59.533 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:22:59.792 00:22:59.792 real 1m0.451s 00:22:59.792 user 2m48.188s 00:22:59.792 sys 0m17.321s 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:59.792 ************************************ 00:22:59.792 END TEST nvmf_host_multipath 00:22:59.792 ************************************ 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.792 ************************************ 00:22:59.792 START TEST nvmf_timeout 00:22:59.792 ************************************ 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:59.792 * Looking for test storage... 
00:22:59.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:22:59.792 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:00.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.052 --rc genhtml_branch_coverage=1 00:23:00.052 --rc genhtml_function_coverage=1 00:23:00.052 --rc genhtml_legend=1 00:23:00.052 --rc geninfo_all_blocks=1 00:23:00.052 --rc geninfo_unexecuted_blocks=1 00:23:00.052 00:23:00.052 ' 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:00.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.052 --rc genhtml_branch_coverage=1 00:23:00.052 --rc genhtml_function_coverage=1 00:23:00.052 --rc genhtml_legend=1 00:23:00.052 --rc geninfo_all_blocks=1 00:23:00.052 --rc geninfo_unexecuted_blocks=1 00:23:00.052 00:23:00.052 ' 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:00.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.052 --rc genhtml_branch_coverage=1 00:23:00.052 --rc genhtml_function_coverage=1 00:23:00.052 --rc genhtml_legend=1 00:23:00.052 --rc geninfo_all_blocks=1 00:23:00.052 --rc geninfo_unexecuted_blocks=1 00:23:00.052 00:23:00.052 ' 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:00.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.052 --rc genhtml_branch_coverage=1 00:23:00.052 --rc genhtml_function_coverage=1 00:23:00.052 --rc genhtml_legend=1 00:23:00.052 --rc geninfo_all_blocks=1 00:23:00.052 --rc geninfo_unexecuted_blocks=1 00:23:00.052 00:23:00.052 ' 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.052 
16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.052 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:00.053 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:00.053 16:58:23 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:00.053 Cannot find device "nvmf_init_br" 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:00.053 Cannot find device "nvmf_init_br2" 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:23:00.053 Cannot find device "nvmf_tgt_br" 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:00.053 Cannot find device "nvmf_tgt_br2" 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:00.053 Cannot find device "nvmf_init_br" 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:00.053 Cannot find device "nvmf_init_br2" 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:00.053 Cannot find device "nvmf_tgt_br" 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:00.053 Cannot find device "nvmf_tgt_br2" 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:00.053 Cannot find device "nvmf_br" 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:00.053 Cannot find device "nvmf_init_if" 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:00.053 Cannot find device "nvmf_init_if2" 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:00.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:00.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:00.053 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:00.313 16:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
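nvmftestinit then rebuilds that topology for the timeout test: two initiator interfaces in the root namespace (10.0.0.1 and 10.0.0.2), two target interfaces inside nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), all joined by the nvmf_br bridge, plus iptables openings for NVMe/TCP port 4420. Collected into one sketch, with names, addresses and rules as traced; the SPDK_NVMF comment tags on the iptables rules are omitted for brevity:

#!/usr/bin/env bash
set -e
ns=nvmf_tgt_ns_spdk
ip netns add "$ns"

# veth pairs: the *_if end carries the address, the *_br end is enslaved to the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side ends move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns "$ns"
ip link set nvmf_tgt_if2 netns "$ns"

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$ns" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$ns" ip link set nvmf_tgt_if up
ip netns exec "$ns" ip link set nvmf_tgt_if2 up
ip netns exec "$ns" ip link set lo up

# Bridge all four *_br ends together so 10.0.0.1-4 share one L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" master nvmf_br
done

# Allow NVMe/TCP (port 4420) in on the initiator interfaces and let the bridge forward.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT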
00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:00.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:00.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:23:00.313 00:23:00.313 --- 10.0.0.3 ping statistics --- 00:23:00.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.313 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:00.313 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:00.313 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:23:00.313 00:23:00.313 --- 10.0.0.4 ping statistics --- 00:23:00.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.313 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:00.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:00.313 00:23:00.313 --- 10.0.0.1 ping statistics --- 00:23:00.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.313 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:00.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:23:00.313 00:23:00.313 --- 10.0.0.2 ping statistics --- 00:23:00.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.313 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=98506 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 98506 00:23:00.313 16:58:24 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98506 ']' 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.313 16:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:00.572 [2024-11-29 16:58:24.132824] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:23:00.572 [2024-11-29 16:58:24.132923] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.572 [2024-11-29 16:58:24.260398] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:00.572 [2024-11-29 16:58:24.285419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:00.572 [2024-11-29 16:58:24.303471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.572 [2024-11-29 16:58:24.303533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.572 [2024-11-29 16:58:24.303542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.572 [2024-11-29 16:58:24.303549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.572 [2024-11-29 16:58:24.303555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
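The target itself is started by nvmfappstart: nvmf_tgt runs inside the namespace with shared-memory id 0, the full tracepoint mask and a two-core mask, and the script blocks until the RPC socket answers. A simplified sketch of that launch-and-wait step; the polling loop below stands in for the test's waitforlisten helper and is an assumption about its behaviour, not a copy of it:

# Launch the target in the test namespace, then poll its RPC socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# rpc_get_methods succeeds once the app is listening on /var/tmp/spdk.sock.
for _ in $(seq 1 100); do
    if $rpc -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is up"
        break
    fi
    sleep 0.1
done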
00:23:00.572 [2024-11-29 16:58:24.304295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.572 [2024-11-29 16:58:24.304555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.572 [2024-11-29 16:58:24.331621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:01.517 16:58:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.517 16:58:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:01.517 16:58:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:01.517 16:58:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:01.517 16:58:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:01.517 16:58:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.517 16:58:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:01.517 16:58:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:01.778 [2024-11-29 16:58:25.391878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.778 16:58:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:02.037 Malloc0 00:23:02.037 16:58:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.295 16:58:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:02.553 16:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:02.813 [2024-11-29 16:58:26.372841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:02.813 16:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=98561 00:23:02.813 16:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:02.813 16:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 98561 /var/tmp/bdevperf.sock 00:23:02.813 16:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98561 ']' 00:23:02.813 16:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.813 16:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.813 16:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
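Once the target answers RPCs, it is provisioned and the host-side I/O generator is launched, all via the calls traced above. Collected in order, with arguments exactly as logged; backgrounding bdevperf and capturing its pid mirrors the bdevperf_pid assignment in the trace:

# Target provisioning: TCP transport, a malloc-backed namespace, and a listener.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# bdevperf runs as a separate app with its own RPC socket, waiting for commands (-z).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!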
00:23:02.813 16:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.813 16:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:02.813 [2024-11-29 16:58:26.439240] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:23:02.813 [2024-11-29 16:58:26.439368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98561 ] 00:23:02.813 [2024-11-29 16:58:26.559734] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:02.813 [2024-11-29 16:58:26.578804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.813 [2024-11-29 16:58:26.598281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.072 [2024-11-29 16:58:26.627459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:03.072 16:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.072 16:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:03.072 16:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:03.331 16:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:03.589 NVMe0n1 00:23:03.589 16:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=98577 00:23:03.589 16:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:03.589 16:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:23:03.589 Running I/O for 10 seconds... 
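On the host side, bdevperf is driven over its own RPC socket: NVMe bdev options are set, the remote subsystem is attached as controller NVMe0 with a 5-second controller-loss timeout and a 2-second reconnect delay, and perform_tests kicks off the 10-second verify workload. A sketch of those three calls, arguments as traced; the trailing "&" reflects that the workload keeps running while the fault is injected next:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
$rpc -s $sock bdev_nvme_set_options -r -1              # flags exactly as passed in the trace
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# Start the workload; it runs in the background while the listener is removed below.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &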
00:23:04.525 16:58:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:04.787 7701.00 IOPS, 30.08 MiB/s [2024-11-29T16:58:28.579Z] [2024-11-29 16:58:28.505718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 
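The flood of "recv state of tqpair ... is same with the state(6)" messages that follows is the target reacting to the fault injected at the top of this block: while bdevperf still has queued I/O, the listener on 10.0.0.3:4420 is removed, the target-side queue pair is torn down, and every outstanding READ completes with ABORTED - SQ DELETION, as the nvme_qpair prints below show. The injected step in isolation; re-adding the listener afterwards is an assumption about the rest of the test and is shown commented out:

# Fault injection: drop the TCP listener while I/O is in flight.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# Later re-adding it would let the host reconnect within its 5 s loss timeout:
# $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420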
00:23:04.787 [2024-11-29 16:58:28.505906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.505993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.787 [2024-11-29 16:58:28.506166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506271] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the 
state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c2ce0 is same with the state(6) to be set 00:23:04.788 [2024-11-29 16:58:28.506670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.788 [2024-11-29 16:58:28.506699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.788 [2024-11-29 16:58:28.506718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.788 [2024-11-29 16:58:28.506728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.788 [2024-11-29 16:58:28.506739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.788 [2024-11-29 16:58:28.506747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.788 [2024-11-29 16:58:28.506758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.788 [2024-11-29 16:58:28.506766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.788 [2024-11-29 16:58:28.506776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.788 [2024-11-29 16:58:28.506784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.788 [2024-11-29 16:58:28.506794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.788 [2024-11-29 16:58:28.506803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.788 [2024-11-29 16:58:28.506813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.788 [2024-11-29 16:58:28.506821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.788 [2024-11-29 16:58:28.506831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.788 [2024-11-29 16:58:28.506839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.788 [2024-11-29 16:58:28.506849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.788 [2024-11-29 16:58:28.506858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:04.788 [2024-11-29 16:58:28.506868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.788 [2024-11-29 16:58:28.506876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.788 [2024-11-29 16:58:28.506886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.788 [2024-11-29 16:58:28.506894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.788 [2024-11-29 16:58:28.506904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.788 [2024-11-29 16:58:28.506912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.506922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.506931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.506941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.506949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.506959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.506967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.506977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.506986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.506996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 
[2024-11-29 16:58:28.507052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:120 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74408 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.789 [2024-11-29 16:58:28.507655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.789 [2024-11-29 16:58:28.507665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.507673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.507683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.507691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.507701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.507709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.507719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.507728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.507737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.507746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.507755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.507764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.507774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.507782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.507792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.507825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.507837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:04.790 [2024-11-29 16:58:28.507846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.507856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.507865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.507875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.507884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.507895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.507909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.507920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.507928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.507939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.507970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.507983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.507991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 
16:58:28.508067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.790 [2024-11-29 16:58:28.508477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.790 [2024-11-29 16:58:28.508485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:04.791 [2024-11-29 16:58:28.508856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.791 [2024-11-29 16:58:28.508866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.791 [2024-11-29 16:58:28.508886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.791 [2024-11-29 16:58:28.508906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.791 [2024-11-29 16:58:28.508925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.791 [2024-11-29 16:58:28.508943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.791 [2024-11-29 16:58:28.508961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.791 [2024-11-29 16:58:28.508980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.508990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.791 [2024-11-29 16:58:28.508998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.509008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.791 [2024-11-29 16:58:28.509017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.509026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.791 [2024-11-29 16:58:28.509035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.509045] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.791 [2024-11-29 16:58:28.509053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.509063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.791 [2024-11-29 16:58:28.509072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.509082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.791 [2024-11-29 16:58:28.509090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.509100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.791 [2024-11-29 16:58:28.509108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.791 [2024-11-29 16:58:28.509118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.791 [2024-11-29 16:58:28.509127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.792 [2024-11-29 16:58:28.509137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-11-29 16:58:28.509145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.792 [2024-11-29 16:58:28.509155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.792 [2024-11-29 16:58:28.509165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.792 [2024-11-29 16:58:28.509175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x84d8f0 is same with the state(6) to be set 00:23:04.792 [2024-11-29 16:58:28.509185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.792 [2024-11-29 16:58:28.509192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.792 [2024-11-29 16:58:28.509201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74912 len:8 PRP1 0x0 PRP2 0x0 00:23:04.792 [2024-11-29 16:58:28.509209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.792 [2024-11-29 16:58:28.509494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:04.792 [2024-11-29 16:58:28.509568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82c970 (9): Bad file descriptor 00:23:04.792 [2024-11-29 16:58:28.509663] uring.c: 664:uring_sock_create: *ERROR*: 
connect() failed, errno = 111 00:23:04.792 [2024-11-29 16:58:28.509684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x82c970 with addr=10.0.0.3, port=4420 00:23:04.792 [2024-11-29 16:58:28.509708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82c970 is same with the state(6) to be set 00:23:04.792 [2024-11-29 16:58:28.509725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82c970 (9): Bad file descriptor 00:23:04.792 [2024-11-29 16:58:28.509739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:04.792 [2024-11-29 16:58:28.509748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:04.792 [2024-11-29 16:58:28.509758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:04.792 [2024-11-29 16:58:28.509767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:04.792 [2024-11-29 16:58:28.509776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:04.792 16:58:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:23:06.663 4626.00 IOPS, 18.07 MiB/s [2024-11-29T16:58:30.714Z] 3084.00 IOPS, 12.05 MiB/s [2024-11-29T16:58:30.714Z] [2024-11-29 16:58:30.509923] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:06.922 [2024-11-29 16:58:30.509984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x82c970 with addr=10.0.0.3, port=4420 00:23:06.922 [2024-11-29 16:58:30.509999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82c970 is same with the state(6) to be set 00:23:06.922 [2024-11-29 16:58:30.510021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82c970 (9): Bad file descriptor 00:23:06.922 [2024-11-29 16:58:30.510049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:06.922 [2024-11-29 16:58:30.510061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:06.922 [2024-11-29 16:58:30.510071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:06.922 [2024-11-29 16:58:30.510081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
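Editor's aside: the repeated uring_sock_create connect() failures with errno = 111 above are the expected behaviour of this timeout test — the target is no longer accepting TCP connections on 10.0.0.3 port 4420, so each reconnect attempt is refused and bdev_nvme retries again after the configured reconnect delay until the controller-loss timeout expires. As a hedged side note (not part of the harness, and assuming the usual Linux header location on the test host), errno 111 can be confirmed to be ECONNREFUSED with:

    grep -w 111 /usr/include/asm-generic/errno.h    # expected: #define ECONNREFUSED 111 /* Connection refused */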
00:23:06.922 [2024-11-29 16:58:30.510091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:06.922 16:58:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:23:06.922 16:58:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:06.922 16:58:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:07.181 16:58:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:23:07.181 16:58:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:23:07.181 16:58:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:07.181 16:58:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:07.439 16:58:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:23:07.439 16:58:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:23:08.633 2313.00 IOPS, 9.04 MiB/s [2024-11-29T16:58:32.684Z] 1850.40 IOPS, 7.23 MiB/s [2024-11-29T16:58:32.684Z] [2024-11-29 16:58:32.510271] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:08.892 [2024-11-29 16:58:32.510512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x82c970 with addr=10.0.0.3, port=4420 00:23:08.892 [2024-11-29 16:58:32.510538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82c970 is same with the state(6) to be set 00:23:08.892 [2024-11-29 16:58:32.510565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82c970 (9): Bad file descriptor 00:23:08.892 [2024-11-29 16:58:32.510585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:08.892 [2024-11-29 16:58:32.510595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:08.892 [2024-11-29 16:58:32.510605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:08.892 [2024-11-29 16:58:32.510616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:08.892 [2024-11-29 16:58:32.510628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:10.763 1542.00 IOPS, 6.02 MiB/s [2024-11-29T16:58:34.555Z] 1321.71 IOPS, 5.16 MiB/s [2024-11-29T16:58:34.555Z] [2024-11-29 16:58:34.510654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:10.763 [2024-11-29 16:58:34.510693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:10.763 [2024-11-29 16:58:34.510720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:10.763 [2024-11-29 16:58:34.510728] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:23:10.763 [2024-11-29 16:58:34.510739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
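Editor's aside: the host/timeout.sh@41 and @37 helpers traced above poll the bdevperf RPC socket to check whether the NVMe controller and its backing bdev still exist while reconnect attempts keep failing. A minimal stand-alone sketch of that check, using the same rpc.py path and socket as this run (jq is assumed to be installed on the host):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    controller=$("$rpc_py" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
    bdev=$("$rpc_py" -s "$sock" bdev_get_bdevs | jq -r '.[].name')
    # While reconnect attempts are still being retried, both names are present:
    [[ "$controller" == "NVMe0" && "$bdev" == "NVMe0n1" ]] && echo "controller and bdev still present"
    # Once --ctrlr-loss-timeout-sec expires, the controller and its bdev are deleted,
    # so both queries print nothing — which is what the later '' == '' checks assert.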
00:23:11.956 1156.50 IOPS, 4.52 MiB/s 00:23:11.956 Latency(us) 00:23:11.956 [2024-11-29T16:58:35.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.956 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:11.956 Verification LBA range: start 0x0 length 0x4000 00:23:11.956 NVMe0n1 : 8.20 1127.71 4.41 15.60 0.00 111753.23 3425.75 7015926.69 00:23:11.956 [2024-11-29T16:58:35.748Z] =================================================================================================================== 00:23:11.956 [2024-11-29T16:58:35.748Z] Total : 1127.71 4.41 15.60 0.00 111753.23 3425.75 7015926.69 00:23:11.956 { 00:23:11.956 "results": [ 00:23:11.956 { 00:23:11.956 "job": "NVMe0n1", 00:23:11.956 "core_mask": "0x4", 00:23:11.956 "workload": "verify", 00:23:11.956 "status": "finished", 00:23:11.956 "verify_range": { 00:23:11.956 "start": 0, 00:23:11.956 "length": 16384 00:23:11.956 }, 00:23:11.956 "queue_depth": 128, 00:23:11.956 "io_size": 4096, 00:23:11.956 "runtime": 8.204221, 00:23:11.956 "iops": 1127.7121862026877, 00:23:11.956 "mibps": 4.405125727354249, 00:23:11.956 "io_failed": 128, 00:23:11.956 "io_timeout": 0, 00:23:11.956 "avg_latency_us": 111753.22513045164, 00:23:11.956 "min_latency_us": 3425.7454545454543, 00:23:11.956 "max_latency_us": 7015926.69090909 00:23:11.956 } 00:23:11.956 ], 00:23:11.956 "core_count": 1 00:23:11.956 } 00:23:12.523 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:23:12.523 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:12.523 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:12.782 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:23:12.782 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:23:12.782 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:12.782 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:13.041 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:23:13.041 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 98577 00:23:13.041 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 98561 00:23:13.041 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98561 ']' 00:23:13.041 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98561 00:23:13.041 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:13.041 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.041 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98561 00:23:13.041 killing process with pid 98561 00:23:13.041 Received shutdown signal, test time was about 9.433012 seconds 00:23:13.041 00:23:13.041 Latency(us) 00:23:13.041 [2024-11-29T16:58:36.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.041 [2024-11-29T16:58:36.833Z] =================================================================================================================== 00:23:13.041 [2024-11-29T16:58:36.833Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:23:13.041 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:13.041 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:13.041 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98561' 00:23:13.041 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98561 00:23:13.041 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98561 00:23:13.300 16:58:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:13.300 [2024-11-29 16:58:37.052201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:13.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.300 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=98694 00:23:13.300 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:13.300 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 98694 /var/tmp/bdevperf.sock 00:23:13.300 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98694 ']' 00:23:13.300 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.300 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.300 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.300 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.300 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:13.559 [2024-11-29 16:58:37.118289] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:23:13.560 [2024-11-29 16:58:37.118838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98694 ] 00:23:13.560 [2024-11-29 16:58:37.239007] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
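Editor's aside: the aggregate figures in the first run's results JSON above (runtime 8.204221 s, 1127.71 IOPS, 4.41 MiB/s at a 4096-byte I/O size) are mutually consistent. A quick arithmetic cross-check, not part of the test harness:

    awk 'BEGIN {
        iops = 1127.7121862026877; runtime = 8.204221; io_size = 4096
        printf "completed I/Os   ~= %.0f\n", iops * runtime            # ~9252 reads/writes in total
        printf "throughput MiB/s ~= %.2f\n", iops * io_size / 1048576  # ~4.41, matching the reported mibps
    }'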
00:23:13.560 [2024-11-29 16:58:37.265560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.560 [2024-11-29 16:58:37.285652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.560 [2024-11-29 16:58:37.314769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:13.844 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.844 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:13.844 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:13.844 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:23:14.150 NVMe0n1 00:23:14.150 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=98709 00:23:14.150 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:14.150 16:58:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:23:14.411 Running I/O for 10 seconds... 00:23:15.346 16:58:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:15.346 7829.00 IOPS, 30.58 MiB/s [2024-11-29T16:58:39.138Z] [2024-11-29 16:58:39.126049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.346 [2024-11-29 16:58:39.126096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126207] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:69976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126426] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70120 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 
[2024-11-29 16:58:39.126817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.346 [2024-11-29 16:58:39.126852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.346 [2024-11-29 16:58:39.126862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.126870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.126879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.126887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.126897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.126904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.126914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.126922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.126931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.126939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.126950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.126958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.126968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.126976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.126986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.126993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:15.347 [2024-11-29 16:58:39.127365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.347 [2024-11-29 16:58:39.127534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.347 [2024-11-29 16:58:39.127544] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127720] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70688 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.127987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.127997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.128005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.128015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.128022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.128032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.128040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.128050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.128057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.128067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.128075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.128084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:15.348 [2024-11-29 16:58:39.128092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.128104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.128112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.128122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.128130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.128140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.128148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.128157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.348 [2024-11-29 16:58:39.128165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.348 [2024-11-29 16:58:39.128175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.128183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.128193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.128201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.128211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.128219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.128229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.128237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.128247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.128254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.128264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.128272] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.128281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.128289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.128299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.128307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.128317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.128771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.128851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.129234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.129452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.129624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.129702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.129858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.129914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.129965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.130158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.130306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.130477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.349 [2024-11-29 16:58:39.130590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.130748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.349 [2024-11-29 16:58:39.130874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.131064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18268f0 is same with the state(6) to be set 00:23:15.349 [2024-11-29 16:58:39.131193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.349 [2024-11-29 16:58:39.131209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.349 [2024-11-29 16:58:39.131218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70816 len:8 PRP1 0x0 PRP2 0x0 00:23:15.349 [2024-11-29 16:58:39.131227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.131393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.349 [2024-11-29 16:58:39.131412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.131424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.349 [2024-11-29 16:58:39.131433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.131443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.349 [2024-11-29 16:58:39.131452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.131462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.349 [2024-11-29 16:58:39.131471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.349 [2024-11-29 16:58:39.131479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805970 is same with the state(6) to be set 00:23:15.349 [2024-11-29 16:58:39.131730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:15.349 [2024-11-29 16:58:39.131772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1805970 (9): Bad file descriptor 00:23:15.349 [2024-11-29 16:58:39.131903] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:15.349 [2024-11-29 16:58:39.131927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1805970 with addr=10.0.0.3, port=4420 00:23:15.349 [2024-11-29 16:58:39.131938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805970 is same with the state(6) to be set 00:23:15.349 [2024-11-29 16:58:39.131957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1805970 (9): Bad file descriptor 00:23:15.349 [2024-11-29 16:58:39.131972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:15.349 [2024-11-29 16:58:39.131982] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:15.349 [2024-11-29 16:58:39.131991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:15.349 [2024-11-29 16:58:39.132001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:23:15.349 [2024-11-29 16:58:39.132013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:15.607 16:58:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:23:16.542 4362.50 IOPS, 17.04 MiB/s [2024-11-29T16:58:40.334Z] [2024-11-29 16:58:40.132110] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.542 [2024-11-29 16:58:40.132365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1805970 with addr=10.0.0.3, port=4420 00:23:16.542 [2024-11-29 16:58:40.132510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805970 is same with the state(6) to be set 00:23:16.542 [2024-11-29 16:58:40.132694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1805970 (9): Bad file descriptor 00:23:16.542 [2024-11-29 16:58:40.132851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:16.542 [2024-11-29 16:58:40.132917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:16.542 [2024-11-29 16:58:40.133048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:16.542 [2024-11-29 16:58:40.133087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:23:16.542 [2024-11-29 16:58:40.133212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:16.542 16:58:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:16.801 [2024-11-29 16:58:40.423633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:16.801 16:58:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 98709 00:23:17.369 2908.33 IOPS, 11.36 MiB/s [2024-11-29T16:58:41.161Z] [2024-11-29 16:58:41.151449] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
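The run above exercises the host reconnect path end to end: once the listener on 10.0.0.3:4420 is removed, every queued WRITE/READ is completed manually as ABORTED - SQ DELETION, each reconnect attempt fails in uring_sock_create with errno 111 (ECONNREFUSED) once per --reconnect-delay-sec, and the controller reset finally succeeds after nvmf_subsystem_add_listener restores the listener inside the 5-second --ctrlr-loss-timeout-sec window. A minimal standalone sketch of that sequence follows; the RPC commands, sockets, and addresses are mirrored from the shell trace above, while the ordering and the sleep are illustrative assumptions rather than the harness script itself.

#!/usr/bin/env bash
# Sketch of the listener bounce shown in the log (assumed environment: SPDK target
# on its default RPC socket, bdevperf app listening on /var/tmp/bdevperf.sock).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Host-side bdev_nvme options, exactly as traced above.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options -r -1
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n "$NQN" --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 \
    --reconnect-delay-sec 1

# Drop the listener while I/O is running: queued commands are aborted (SQ DELETION)
# and reconnect attempts fail with errno 111, one attempt per second.
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
sleep 2    # illustrative; must stay under the 5 s controller-loss timeout

# Restore the listener; the next reconnect attempt succeeds and the reset completes.
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420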
00:23:19.242 2181.25 IOPS, 8.52 MiB/s [2024-11-29T16:58:44.412Z] 3585.40 IOPS, 14.01 MiB/s [2024-11-29T16:58:45.349Z] 4777.17 IOPS, 18.66 MiB/s [2024-11-29T16:58:46.284Z] 5644.86 IOPS, 22.05 MiB/s [2024-11-29T16:58:47.222Z] 6283.25 IOPS, 24.54 MiB/s [2024-11-29T16:58:48.159Z] 6778.00 IOPS, 26.48 MiB/s [2024-11-29T16:58:48.159Z] 7174.40 IOPS, 28.02 MiB/s 00:23:24.367 Latency(us) 00:23:24.367 [2024-11-29T16:58:48.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.367 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:24.367 Verification LBA range: start 0x0 length 0x4000 00:23:24.367 NVMe0n1 : 10.01 7173.66 28.02 0.00 0.00 17801.14 2442.71 3019898.88 00:23:24.367 [2024-11-29T16:58:48.159Z] =================================================================================================================== 00:23:24.367 [2024-11-29T16:58:48.159Z] Total : 7173.66 28.02 0.00 0.00 17801.14 2442.71 3019898.88 00:23:24.367 { 00:23:24.367 "results": [ 00:23:24.367 { 00:23:24.367 "job": "NVMe0n1", 00:23:24.367 "core_mask": "0x4", 00:23:24.367 "workload": "verify", 00:23:24.367 "status": "finished", 00:23:24.367 "verify_range": { 00:23:24.367 "start": 0, 00:23:24.367 "length": 16384 00:23:24.367 }, 00:23:24.367 "queue_depth": 128, 00:23:24.367 "io_size": 4096, 00:23:24.367 "runtime": 10.008704, 00:23:24.367 "iops": 7173.656049774277, 00:23:24.367 "mibps": 28.022093944430768, 00:23:24.367 "io_failed": 0, 00:23:24.367 "io_timeout": 0, 00:23:24.367 "avg_latency_us": 17801.14207597219, 00:23:24.367 "min_latency_us": 2442.7054545454544, 00:23:24.367 "max_latency_us": 3019898.88 00:23:24.367 } 00:23:24.367 ], 00:23:24.367 "core_count": 1 00:23:24.367 } 00:23:24.367 16:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=98815 00:23:24.367 16:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:24.367 16:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:23:24.627 Running I/O for 10 seconds... 
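The JSON summary above for the first bdevperf pass (rpc_pid 98709) pins the headline numbers down: 7173.66 IOPS at a 4096-byte I/O size over a 10.0087 s run works out to 7173.66 * 4096 / 2^20 ≈ 28.02 MiB/s, which matches the MiB/s column of the latency table. The short sketch below recomputes that figure from the JSON; it assumes the blob has been saved to a file named results.json and that jq is available, neither of which is part of the test harness.

#!/usr/bin/env bash
# Recompute throughput from the bdevperf perform_tests JSON summary (assumed file name).
iops=$(jq '.results[0].iops' results.json)
io_size=$(jq '.results[0].io_size' results.json)
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
# 7173.656049774277 * 4096 / 1048576 ≈ 28.02 MiB/s, matching "mibps" in the JSON above.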
00:23:25.569 16:58:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:25.569 7588.00 IOPS, 29.64 MiB/s [2024-11-29T16:58:49.361Z] [2024-11-29 16:58:49.296989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.569 [2024-11-29 16:58:49.297178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 
00:23:25.569 [2024-11-29 16:58:49.297185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set (this message repeats verbatim, with only the microsecond timestamp advancing, from 16:58:49.297185 through 16:58:49.297912, where the captured output is truncated)
*ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.570 [2024-11-29 16:58:49.297919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.570 [2024-11-29 16:58:49.297927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3780 is same with the state(6) to be set 00:23:25.570 [2024-11-29 16:58:49.297990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.570 [2024-11-29 16:58:49.298422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.570 [2024-11-29 16:58:49.298431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 
16:58:49.298455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:117 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.298989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.298998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69288 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:69344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.571 [2024-11-29 16:58:49.299290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.571 [2024-11-29 16:58:49.299300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:25.572 [2024-11-29 16:58:49.299328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299532] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299953] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.299984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.299993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.300005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.300014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.300024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.300033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.300044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.300053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.300065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.300074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.300085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.300094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.300106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.572 [2024-11-29 16:58:49.300115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.572 [2024-11-29 16:58:49.300126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.573 [2024-11-29 16:58:49.300135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.573 [2024-11-29 16:58:49.300183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.573 [2024-11-29 16:58:49.300205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.573 [2024-11-29 16:58:49.300224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.573 [2024-11-29 16:58:49.300243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.573 [2024-11-29 16:58:49.300262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.573 [2024-11-29 16:58:49.300281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.573 [2024-11-29 16:58:49.300300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.573 [2024-11-29 16:58:49.300319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.573 [2024-11-29 16:58:49.300338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.573 [2024-11-29 16:58:49.300357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.573 [2024-11-29 16:58:49.300386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:25.573 [2024-11-29 16:58:49.300397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.573 [2024-11-29 16:58:49.300406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:69816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300607] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.573 [2024-11-29 16:58:49.300715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.573 [2024-11-29 16:58:49.300735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824a00 is same with the state(6) to be set 00:23:25.573 [2024-11-29 16:58:49.300757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:25.573 [2024-11-29 16:58:49.300765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:25.573 [2024-11-29 16:58:49.300776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69792 len:8 PRP1 0x0 PRP2 0x0 00:23:25.573 [2024-11-29 16:58:49.300785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.573 [2024-11-29 16:58:49.300918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300929] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.573 [2024-11-29 16:58:49.300937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.573 [2024-11-29 16:58:49.300946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.573 [2024-11-29 16:58:49.300955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.574 [2024-11-29 16:58:49.300967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.574 [2024-11-29 16:58:49.300975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.574 [2024-11-29 16:58:49.300984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805970 is same with the state(6) to be set 00:23:25.574 [2024-11-29 16:58:49.301193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:25.574 [2024-11-29 16:58:49.301222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1805970 (9): Bad file descriptor 00:23:25.574 [2024-11-29 16:58:49.301319] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.574 [2024-11-29 16:58:49.301359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1805970 with addr=10.0.0.3, port=4420 00:23:25.574 [2024-11-29 16:58:49.301383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805970 is same with the state(6) to be set 00:23:25.574 [2024-11-29 16:58:49.301405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1805970 (9): Bad file descriptor 00:23:25.574 [2024-11-29 16:58:49.301421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:25.574 [2024-11-29 16:58:49.301431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:25.574 [2024-11-29 16:58:49.301442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:25.574 [2024-11-29 16:58:49.301452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:23:25.574 [2024-11-29 16:58:49.301462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:25.574 16:58:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:26.771 4306.00 IOPS, 16.82 MiB/s [2024-11-29T16:58:50.563Z] [2024-11-29 16:58:50.317031] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:26.771 [2024-11-29 16:58:50.317257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1805970 with addr=10.0.0.3, port=4420 00:23:26.771 [2024-11-29 16:58:50.317415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805970 is same with the state(6) to be set 00:23:26.771 [2024-11-29 16:58:50.317675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1805970 (9): Bad file descriptor 00:23:26.771 [2024-11-29 16:58:50.317716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:26.771 [2024-11-29 16:58:50.317729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:26.771 [2024-11-29 16:58:50.317739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:26.771 [2024-11-29 16:58:50.317750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:26.771 [2024-11-29 16:58:50.317761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:27.718 2870.67 IOPS, 11.21 MiB/s [2024-11-29T16:58:51.510Z] [2024-11-29 16:58:51.317842] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:27.718 [2024-11-29 16:58:51.318055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1805970 with addr=10.0.0.3, port=4420 00:23:27.718 [2024-11-29 16:58:51.318206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805970 is same with the state(6) to be set 00:23:27.718 [2024-11-29 16:58:51.318359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1805970 (9): Bad file descriptor 00:23:27.718 [2024-11-29 16:58:51.318403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:27.718 [2024-11-29 16:58:51.318417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:27.718 [2024-11-29 16:58:51.318429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:27.718 [2024-11-29 16:58:51.318439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:23:27.718 [2024-11-29 16:58:51.318450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:28.656 2153.00 IOPS, 8.41 MiB/s [2024-11-29T16:58:52.448Z] [2024-11-29 16:58:52.318881] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.656 [2024-11-29 16:58:52.318939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1805970 with addr=10.0.0.3, port=4420 00:23:28.656 [2024-11-29 16:58:52.318967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805970 is same with the state(6) to be set 00:23:28.656 [2024-11-29 16:58:52.319185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1805970 (9): Bad file descriptor 00:23:28.656 [2024-11-29 16:58:52.319437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:28.656 [2024-11-29 16:58:52.319451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:28.656 [2024-11-29 16:58:52.319460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:28.656 [2024-11-29 16:58:52.319484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:28.656 [2024-11-29 16:58:52.319494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:28.656 16:58:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:28.915 [2024-11-29 16:58:52.583244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:28.915 16:58:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 98815 00:23:29.743 1722.40 IOPS, 6.73 MiB/s [2024-11-29T16:58:53.535Z] [2024-11-29 16:58:53.347696] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
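The reconnect loop traced above repeats roughly once per second in this run: each uring_sock_create connect() to 10.0.0.3 port 4420 fails with errno 111 (ECONNREFUSED) while the target listener is down, bdev_nvme marks the controller as failed, and nvme_ctrlr_disconnect schedules the next reset; the reset finally succeeds right after the listener comes back (the tcp.c:1081 "Target Listening" notice above). A minimal sketch of that recovery step, assembled only from the host/timeout.sh trace lines visible in this log (address, port, NQN and the PID 98815 are this run's values, not general defaults):

    # Re-create the TCP listener on 10.0.0.3:4420 that the host has been
    # failing to reach, so the next periodic controller reset can reconnect.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Wait for the background I/O job started earlier in the test to finish
    # (98815 is the PID recorded by this particular run).
    wait 98815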
00:23:31.617 2840.50 IOPS, 11.10 MiB/s [2024-11-29T16:58:56.343Z] 3936.43 IOPS, 15.38 MiB/s [2024-11-29T16:58:57.277Z] 4756.38 IOPS, 18.58 MiB/s [2024-11-29T16:58:58.214Z] 5407.44 IOPS, 21.12 MiB/s [2024-11-29T16:58:58.214Z] 5917.10 IOPS, 23.11 MiB/s 00:23:34.422 Latency(us) 00:23:34.422 [2024-11-29T16:58:58.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.422 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:34.422 Verification LBA range: start 0x0 length 0x4000 00:23:34.422 NVMe0n1 : 10.01 5922.79 23.14 4195.10 0.00 12619.55 673.98 3019898.88 00:23:34.422 [2024-11-29T16:58:58.214Z] =================================================================================================================== 00:23:34.422 [2024-11-29T16:58:58.214Z] Total : 5922.79 23.14 4195.10 0.00 12619.55 0.00 3019898.88 00:23:34.422 { 00:23:34.422 "results": [ 00:23:34.422 { 00:23:34.422 "job": "NVMe0n1", 00:23:34.422 "core_mask": "0x4", 00:23:34.422 "workload": "verify", 00:23:34.422 "status": "finished", 00:23:34.422 "verify_range": { 00:23:34.422 "start": 0, 00:23:34.422 "length": 16384 00:23:34.422 }, 00:23:34.422 "queue_depth": 128, 00:23:34.422 "io_size": 4096, 00:23:34.422 "runtime": 10.009299, 00:23:34.422 "iops": 5922.792395351563, 00:23:34.422 "mibps": 23.135907794342042, 00:23:34.422 "io_failed": 41990, 00:23:34.422 "io_timeout": 0, 00:23:34.422 "avg_latency_us": 12619.550930186006, 00:23:34.422 "min_latency_us": 673.9781818181818, 00:23:34.422 "max_latency_us": 3019898.88 00:23:34.422 } 00:23:34.422 ], 00:23:34.422 "core_count": 1 00:23:34.422 } 00:23:34.422 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 98694 00:23:34.422 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98694 ']' 00:23:34.422 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98694 00:23:34.422 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:34.422 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.422 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98694 00:23:34.681 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:34.681 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:34.681 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98694' 00:23:34.681 killing process with pid 98694 00:23:34.681 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98694 00:23:34.681 Received shutdown signal, test time was about 10.000000 seconds 00:23:34.681 00:23:34.681 Latency(us) 00:23:34.681 [2024-11-29T16:58:58.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.681 [2024-11-29T16:58:58.473Z] =================================================================================================================== 00:23:34.681 [2024-11-29T16:58:58.473Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:34.681 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98694 00:23:34.681 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=98924 00:23:34.681 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:23:34.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.681 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 98924 /var/tmp/bdevperf.sock 00:23:34.681 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98924 ']' 00:23:34.681 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.681 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.681 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.681 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.681 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:34.681 [2024-11-29 16:58:58.417467] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:23:34.681 [2024-11-29 16:58:58.417772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98924 ] 00:23:34.940 [2024-11-29 16:58:58.546748] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:34.940 [2024-11-29 16:58:58.565865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.940 [2024-11-29 16:58:58.585981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.940 [2024-11-29 16:58:58.615065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:34.940 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.940 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:34.940 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=98932 00:23:34.940 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98924 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:34.940 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:35.199 16:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:35.457 NVMe0n1 00:23:35.715 16:58:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=98974 00:23:35.715 16:58:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:35.715 16:58:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:23:35.715 Running I/O for 10 seconds... 
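For reference, the bdevperf setup traced above boils down to the following sequence. This is an approximate sketch assembled from the host/timeout.sh trace lines in this log; the repo paths, the /var/tmp/bdevperf.sock location, and the 5 s controller-loss timeout / 2 s reconnect delay are the values used by this run, not fixed defaults:

    # Start bdevperf on its own RPC socket, backgrounded so the test can
    # drive it via RPC (flags copied from the trace: core mask 0x4,
    # queue depth 128, 4 KiB random reads, 10 s run).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &

    # Configure NVMe bdev options, then attach the target over TCP with a
    # 5 s controller-loss timeout and a 2 s reconnect delay.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_options -r -1 -e 9
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Kick off the actual I/O run.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests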
00:23:36.650 16:59:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:36.910 16891.00 IOPS, 65.98 MiB/s [2024-11-29T16:59:00.702Z] [2024-11-29 16:59:00.531636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.531923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 
00:23:36.910 [2024-11-29 16:59:00.532420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.910 [2024-11-29 16:59:00.532684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.911 [2024-11-29 16:59:00.532691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.911 [2024-11-29 16:59:00.532698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.911 [2024-11-29 16:59:00.532706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.911 [2024-11-29 16:59:00.532714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.911 [2024-11-29 16:59:00.532721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.911 [2024-11-29 16:59:00.532728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bfde0 is same with the state(6) to be set 00:23:36.911 [2024-11-29 16:59:00.532783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.532820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.532841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 
[2024-11-29 16:59:00.532850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.532861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.532870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.532880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.532888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.532898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.532907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.532917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.532925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.532935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.532943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.532953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.532961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.532971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.532979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.532989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.532997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533219] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:116904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.911 [2024-11-29 16:59:00.533495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.911 [2024-11-29 16:59:00.533505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:36.912 [2024-11-29 16:59:00.533616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 
16:59:00.533799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:68376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.533989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.533999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.534007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.534018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.534028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.534038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.534046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.534056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.534064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.534074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.534082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.534092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.534100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.534110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.534118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.534128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.534136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.534146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.534154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.534163] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.534171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.534181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.534189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.534200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.534208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.534217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.912 [2024-11-29 16:59:00.534226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.912 [2024-11-29 16:59:00.534236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 
16:59:00.534779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.534980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.534988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.535000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.535009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.535020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.913 [2024-11-29 16:59:00.535028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.913 [2024-11-29 16:59:00.535039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.914 [2024-11-29 16:59:00.535047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.914 [2024-11-29 16:59:00.535057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.914 [2024-11-29 16:59:00.535070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.914 [2024-11-29 16:59:00.535080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.914 [2024-11-29 16:59:00.535088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.914 [2024-11-29 16:59:00.535099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.914 [2024-11-29 16:59:00.535107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.914 [2024-11-29 16:59:00.535117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.914 [2024-11-29 16:59:00.535126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.914 [2024-11-29 16:59:00.535136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.914 [2024-11-29 16:59:00.535144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.914 [2024-11-29 16:59:00.535155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.914 [2024-11-29 16:59:00.535163] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.914 [2024-11-29 16:59:00.535173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.914 [2024-11-29 16:59:00.535182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.914 [2024-11-29 16:59:00.535192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.914 [2024-11-29 16:59:00.535200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.914 [2024-11-29 16:59:00.535211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.914 [2024-11-29 16:59:00.535219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.914 [2024-11-29 16:59:00.535230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.914 [2024-11-29 16:59:00.535238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.914 [2024-11-29 16:59:00.535248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24818f0 is same with the state(6) to be set 00:23:36.914 [2024-11-29 16:59:00.535259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:36.914 [2024-11-29 16:59:00.535267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:36.914 [2024-11-29 16:59:00.535277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129480 len:8 PRP1 0x0 PRP2 0x0 00:23:36.914 [2024-11-29 16:59:00.535285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.914 [2024-11-29 16:59:00.535620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:36.914 [2024-11-29 16:59:00.535719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2460970 (9): Bad file descriptor 00:23:36.914 [2024-11-29 16:59:00.535860] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.914 [2024-11-29 16:59:00.535884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2460970 with addr=10.0.0.3, port=4420 00:23:36.914 [2024-11-29 16:59:00.535896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460970 is same with the state(6) to be set 00:23:36.914 [2024-11-29 16:59:00.535915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2460970 (9): Bad file descriptor 00:23:36.914 [2024-11-29 16:59:00.535931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:36.914 [2024-11-29 16:59:00.535941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:36.914 [2024-11-29 16:59:00.535951] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:36.914 [2024-11-29 16:59:00.535962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:36.914 [2024-11-29 16:59:00.535973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:36.914 16:59:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 98974 00:23:38.783 9748.50 IOPS, 38.08 MiB/s [2024-11-29T16:59:02.575Z] 6499.00 IOPS, 25.39 MiB/s [2024-11-29T16:59:02.575Z] [2024-11-29 16:59:02.536116] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.783 [2024-11-29 16:59:02.536205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2460970 with addr=10.0.0.3, port=4420 00:23:38.783 [2024-11-29 16:59:02.536220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460970 is same with the state(6) to be set 00:23:38.783 [2024-11-29 16:59:02.536242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2460970 (9): Bad file descriptor 00:23:38.783 [2024-11-29 16:59:02.536260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:38.783 [2024-11-29 16:59:02.536269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:38.783 [2024-11-29 16:59:02.536280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:38.783 [2024-11-29 16:59:02.536290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:38.783 [2024-11-29 16:59:02.536299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:40.655 4874.25 IOPS, 19.04 MiB/s [2024-11-29T16:59:04.706Z] 3899.40 IOPS, 15.23 MiB/s [2024-11-29T16:59:04.706Z] [2024-11-29 16:59:04.536424] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.914 [2024-11-29 16:59:04.536488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2460970 with addr=10.0.0.3, port=4420 00:23:40.914 [2024-11-29 16:59:04.536503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2460970 is same with the state(6) to be set 00:23:40.914 [2024-11-29 16:59:04.536527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2460970 (9): Bad file descriptor 00:23:40.914 [2024-11-29 16:59:04.536544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:40.914 [2024-11-29 16:59:04.536554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:40.914 [2024-11-29 16:59:04.536564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:40.914 [2024-11-29 16:59:04.536575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:23:40.914 [2024-11-29 16:59:04.536585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:42.784 3249.50 IOPS, 12.69 MiB/s [2024-11-29T16:59:06.576Z] 2785.29 IOPS, 10.88 MiB/s [2024-11-29T16:59:06.576Z] [2024-11-29 16:59:06.536673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:42.784 [2024-11-29 16:59:06.536710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:42.784 [2024-11-29 16:59:06.536736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:42.784 [2024-11-29 16:59:06.536745] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:23:42.784 [2024-11-29 16:59:06.536755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:44.096 2437.12 IOPS, 9.52 MiB/s 00:23:44.096 Latency(us) 00:23:44.096 [2024-11-29T16:59:07.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.096 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:44.096 NVMe0n1 : 8.16 2389.78 9.34 15.69 0.00 53155.32 7089.80 7015926.69 00:23:44.096 [2024-11-29T16:59:07.888Z] =================================================================================================================== 00:23:44.096 [2024-11-29T16:59:07.888Z] Total : 2389.78 9.34 15.69 0.00 53155.32 7089.80 7015926.69 00:23:44.096 { 00:23:44.096 "results": [ 00:23:44.096 { 00:23:44.096 "job": "NVMe0n1", 00:23:44.096 "core_mask": "0x4", 00:23:44.096 "workload": "randread", 00:23:44.096 "status": "finished", 00:23:44.096 "queue_depth": 128, 00:23:44.096 "io_size": 4096, 00:23:44.096 "runtime": 8.158495, 00:23:44.096 "iops": 2389.7789972292685, 00:23:44.096 "mibps": 9.33507420792683, 00:23:44.096 "io_failed": 128, 00:23:44.096 "io_timeout": 0, 00:23:44.096 "avg_latency_us": 53155.32443052693, 00:23:44.096 "min_latency_us": 7089.8036363636365, 00:23:44.096 "max_latency_us": 7015926.69090909 00:23:44.096 } 00:23:44.096 ], 00:23:44.096 "core_count": 1 00:23:44.096 } 00:23:44.096 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:44.096 Attaching 5 probes... 
00:23:44.096 1346.790644: reset bdev controller NVMe0 00:23:44.096 1346.943314: reconnect bdev controller NVMe0 00:23:44.096 3347.180364: reconnect delay bdev controller NVMe0 00:23:44.096 3347.215450: reconnect bdev controller NVMe0 00:23:44.096 5347.509049: reconnect delay bdev controller NVMe0 00:23:44.096 5347.533391: reconnect bdev controller NVMe0 00:23:44.096 7347.820676: reconnect delay bdev controller NVMe0 00:23:44.096 7347.836409: reconnect bdev controller NVMe0 00:23:44.096 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:44.096 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:44.096 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 98932 00:23:44.096 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:44.096 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 98924 00:23:44.096 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98924 ']' 00:23:44.096 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98924 00:23:44.096 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:44.096 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.096 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98924 00:23:44.096 killing process with pid 98924 00:23:44.096 Received shutdown signal, test time was about 8.224709 seconds 00:23:44.096 00:23:44.097 Latency(us) 00:23:44.097 [2024-11-29T16:59:07.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.097 [2024-11-29T16:59:07.889Z] =================================================================================================================== 00:23:44.097 [2024-11-29T16:59:07.889Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:44.097 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:44.097 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:44.097 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98924' 00:23:44.097 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98924 00:23:44.097 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98924 00:23:44.097 16:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:44.356 16:59:08 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:44.356 rmmod nvme_tcp 00:23:44.356 rmmod nvme_fabrics 00:23:44.356 rmmod nvme_keyring 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 98506 ']' 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 98506 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98506 ']' 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98506 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98506 00:23:44.356 killing process with pid 98506 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98506' 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98506 00:23:44.356 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98506 00:23:44.614 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.614 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:44.614 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:44.614 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:44.614 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:44.614 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:23:44.614 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:23:44.614 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:44.614 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:44.614 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:44.615 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:44.615 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:44.615 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:44.615 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:44.615 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:44.615 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:44.615 16:59:08 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:44.615 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:44.615 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:44.873 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:44.873 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:44.873 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:44.873 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:44.873 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.873 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.873 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.873 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:44.873 00:23:44.873 real 0m45.047s 00:23:44.873 user 2m11.507s 00:23:44.873 sys 0m5.340s 00:23:44.873 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.873 ************************************ 00:23:44.873 END TEST nvmf_timeout 00:23:44.873 ************************************ 00:23:44.873 16:59:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:44.873 16:59:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:44.873 16:59:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:44.873 00:23:44.873 real 5m43.036s 00:23:44.873 user 16m2.953s 00:23:44.873 sys 1m16.437s 00:23:44.873 16:59:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.873 ************************************ 00:23:44.873 END TEST nvmf_host 00:23:44.873 ************************************ 00:23:44.873 16:59:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.873 16:59:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:44.873 16:59:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:23:44.873 00:23:44.873 real 15m7.242s 00:23:44.873 user 39m45.808s 00:23:44.873 sys 4m0.019s 00:23:44.873 16:59:08 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.873 ************************************ 00:23:44.873 END TEST nvmf_tcp 00:23:44.873 ************************************ 00:23:44.873 16:59:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:44.873 16:59:08 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:23:44.873 16:59:08 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:44.873 16:59:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:44.873 16:59:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.873 16:59:08 -- common/autotest_common.sh@10 -- # set +x 00:23:44.873 ************************************ 00:23:44.873 START TEST nvmf_dif 00:23:44.873 ************************************ 00:23:44.873 16:59:08 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:45.131 * Looking for test storage... 
00:23:45.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:45.131 16:59:08 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:45.131 16:59:08 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:23:45.131 16:59:08 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:45.131 16:59:08 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:45.131 16:59:08 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:23:45.131 16:59:08 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:45.131 16:59:08 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:45.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.131 --rc genhtml_branch_coverage=1 00:23:45.131 --rc genhtml_function_coverage=1 00:23:45.131 --rc genhtml_legend=1 00:23:45.131 --rc geninfo_all_blocks=1 00:23:45.131 --rc geninfo_unexecuted_blocks=1 00:23:45.131 00:23:45.131 ' 00:23:45.131 16:59:08 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:45.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.131 --rc genhtml_branch_coverage=1 00:23:45.131 --rc genhtml_function_coverage=1 00:23:45.131 --rc genhtml_legend=1 00:23:45.131 --rc geninfo_all_blocks=1 00:23:45.131 --rc geninfo_unexecuted_blocks=1 00:23:45.131 00:23:45.131 ' 00:23:45.131 16:59:08 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:23:45.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.131 --rc genhtml_branch_coverage=1 00:23:45.131 --rc genhtml_function_coverage=1 00:23:45.131 --rc genhtml_legend=1 00:23:45.131 --rc geninfo_all_blocks=1 00:23:45.131 --rc geninfo_unexecuted_blocks=1 00:23:45.131 00:23:45.131 ' 00:23:45.131 16:59:08 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:45.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.131 --rc genhtml_branch_coverage=1 00:23:45.131 --rc genhtml_function_coverage=1 00:23:45.131 --rc genhtml_legend=1 00:23:45.131 --rc geninfo_all_blocks=1 00:23:45.132 --rc geninfo_unexecuted_blocks=1 00:23:45.132 00:23:45.132 ' 00:23:45.132 16:59:08 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:45.132 16:59:08 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:23:45.132 16:59:08 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.132 16:59:08 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.132 16:59:08 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.132 16:59:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.132 16:59:08 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.132 16:59:08 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.132 16:59:08 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:45.132 16:59:08 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:45.132 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:45.132 16:59:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:45.132 16:59:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:45.132 16:59:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:45.132 16:59:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:45.132 16:59:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.132 16:59:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:45.132 16:59:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:45.132 16:59:08 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:45.132 Cannot find device "nvmf_init_br" 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:45.132 Cannot find device "nvmf_init_br2" 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:45.132 Cannot find device "nvmf_tgt_br" 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@164 -- # true 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:45.132 Cannot find device "nvmf_tgt_br2" 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@165 -- # true 00:23:45.132 16:59:08 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:45.390 Cannot find device "nvmf_init_br" 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@166 -- # true 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:45.390 Cannot find device "nvmf_init_br2" 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@167 -- # true 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:45.390 Cannot find device "nvmf_tgt_br" 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@168 -- # true 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:45.390 Cannot find device "nvmf_tgt_br2" 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@169 -- # true 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:45.390 Cannot find device "nvmf_br" 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@170 -- # true 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:23:45.390 Cannot find device "nvmf_init_if" 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@171 -- # true 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:45.390 Cannot find device "nvmf_init_if2" 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@172 -- # true 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:45.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:45.390 16:59:08 nvmf_dif -- nvmf/common.sh@173 -- # true 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:45.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@174 -- # true 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:45.390 16:59:09 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:45.648 16:59:09 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:45.648 16:59:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:45.648 16:59:09 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:45.648 16:59:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:45.648 16:59:09 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:45.648 16:59:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:45.648 16:59:09 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:45.648 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:45.648 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:23:45.648 00:23:45.648 --- 10.0.0.3 ping statistics --- 00:23:45.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.648 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:23:45.648 16:59:09 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:45.648 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:45.648 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:23:45.648 00:23:45.648 --- 10.0.0.4 ping statistics --- 00:23:45.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.648 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:23:45.648 16:59:09 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:45.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:45.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:45.648 00:23:45.648 --- 10.0.0.1 ping statistics --- 00:23:45.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.648 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:45.648 16:59:09 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:45.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:45.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:23:45.648 00:23:45.648 --- 10.0.0.2 ping statistics --- 00:23:45.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.648 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:23:45.648 16:59:09 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.648 16:59:09 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:23:45.648 16:59:09 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:23:45.648 16:59:09 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:45.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:45.907 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:45.907 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:45.907 16:59:09 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.907 16:59:09 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:45.907 16:59:09 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:45.907 16:59:09 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.907 16:59:09 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:45.907 16:59:09 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:45.907 16:59:09 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:45.907 16:59:09 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:45.907 16:59:09 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:45.907 16:59:09 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.907 16:59:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:45.907 16:59:09 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=99461 00:23:45.907 16:59:09 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:45.907 16:59:09 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 99461 00:23:45.907 16:59:09 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 99461 ']' 00:23:45.907 16:59:09 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.907 16:59:09 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.907 16:59:09 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.907 16:59:09 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.907 16:59:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:46.165 [2024-11-29 16:59:09.719746] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:23:46.165 [2024-11-29 16:59:09.719859] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.165 [2024-11-29 16:59:09.847109] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:46.165 [2024-11-29 16:59:09.879947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.165 [2024-11-29 16:59:09.903376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.165 [2024-11-29 16:59:09.903444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.165 [2024-11-29 16:59:09.903458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.165 [2024-11-29 16:59:09.903468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.165 [2024-11-29 16:59:09.903477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.165 [2024-11-29 16:59:09.903847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.165 [2024-11-29 16:59:09.939580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:46.424 16:59:09 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.424 16:59:09 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:23:46.424 16:59:09 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:46.424 16:59:09 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.424 16:59:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:46.424 16:59:10 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.424 16:59:10 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:46.424 16:59:10 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:46.424 16:59:10 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.424 16:59:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:46.424 [2024-11-29 16:59:10.040369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.424 16:59:10 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.424 16:59:10 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:46.424 16:59:10 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:46.424 16:59:10 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.424 16:59:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:46.424 ************************************ 00:23:46.424 START TEST fio_dif_1_default 00:23:46.424 ************************************ 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:46.424 bdev_null0 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:46.424 [2024-11-29 16:59:10.088530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:23:46.424 16:59:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:46.424 { 00:23:46.424 "params": { 00:23:46.424 "name": "Nvme$subsystem", 00:23:46.424 "trtype": "$TEST_TRANSPORT", 00:23:46.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.424 "adrfam": "ipv4", 00:23:46.424 "trsvcid": "$NVMF_PORT", 00:23:46.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.424 "hdgst": ${hdgst:-false}, 00:23:46.424 "ddgst": ${ddgst:-false} 00:23:46.424 }, 00:23:46.424 "method": "bdev_nvme_attach_controller" 00:23:46.424 } 00:23:46.424 EOF 00:23:46.424 )") 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:46.425 "params": { 00:23:46.425 "name": "Nvme0", 00:23:46.425 "trtype": "tcp", 00:23:46.425 "traddr": "10.0.0.3", 00:23:46.425 "adrfam": "ipv4", 00:23:46.425 "trsvcid": "4420", 00:23:46.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:46.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:46.425 "hdgst": false, 00:23:46.425 "ddgst": false 00:23:46.425 }, 00:23:46.425 "method": "bdev_nvme_attach_controller" 00:23:46.425 }' 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:46.425 16:59:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:46.683 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:46.683 fio-3.35 00:23:46.683 Starting 1 thread 00:23:58.890 00:23:58.890 filename0: (groupid=0, jobs=1): err= 0: pid=99516: Fri Nov 29 16:59:20 2024 00:23:58.890 read: IOPS=9841, BW=38.4MiB/s (40.3MB/s)(384MiB/10001msec) 00:23:58.890 slat (usec): min=5, max=353, avg= 7.83, 
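(The fio invocation above reads its SPDK bdev configuration from /dev/fd/62 and its job file from /dev/fd/61, both generated on the fly by the test harness. A minimal hand-run sketch, assuming the bdev_nvme params printed above are saved into an ordinary SPDK JSON config file and the job parameters from the fio banner below are reused; the file name, job name, and runtime here are illustrative only, not taken from the test:

  # bdev.json -- roughly the envelope the harness builds around the printed params:
  # {"subsystems":[{"subsystem":"bdev","config":[
  #     {"method":"bdev_nvme_attach_controller","params":{ ...as printed above... }}]}]}
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
    --thread=1 --name=filename0 --filename=Nvme0n1 \
    --rw=randread --bs=4096 --iodepth=4 --time_based --runtime=10

The spdk_bdev ioengine requires thread=1, and --filename names the bdev exposed by the attached controller (Nvme0n1 here) rather than a block device path.)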
stdev= 4.55 00:23:58.890 clat (usec): min=314, max=2273, avg=383.34, stdev=45.71 00:23:58.890 lat (usec): min=320, max=2301, avg=391.17, stdev=46.76 00:23:58.890 clat percentiles (usec): 00:23:58.890 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 347], 00:23:58.891 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 388], 00:23:58.891 | 70.00th=[ 400], 80.00th=[ 412], 90.00th=[ 437], 95.00th=[ 457], 00:23:58.891 | 99.00th=[ 515], 99.50th=[ 553], 99.90th=[ 701], 99.95th=[ 783], 00:23:58.891 | 99.99th=[ 1106] 00:23:58.891 bw ( KiB/s): min=37376, max=40576, per=100.00%, avg=39440.84, stdev=920.51, samples=19 00:23:58.891 iops : min= 9344, max=10144, avg=9860.21, stdev=230.13, samples=19 00:23:58.891 lat (usec) : 500=98.61%, 750=1.32%, 1000=0.05% 00:23:58.891 lat (msec) : 2=0.01%, 4=0.01% 00:23:58.891 cpu : usr=84.65%, sys=13.37%, ctx=116, majf=0, minf=9 00:23:58.891 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.891 issued rwts: total=98428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.891 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:58.891 00:23:58.891 Run status group 0 (all jobs): 00:23:58.891 READ: bw=38.4MiB/s (40.3MB/s), 38.4MiB/s-38.4MiB/s (40.3MB/s-40.3MB/s), io=384MiB (403MB), run=10001-10001msec 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.891 00:23:58.891 real 0m10.906s 00:23:58.891 user 0m9.052s 00:23:58.891 sys 0m1.567s 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.891 ************************************ 00:23:58.891 END TEST fio_dif_1_default 00:23:58.891 16:59:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:58.891 ************************************ 00:23:58.891 16:59:21 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:58.891 16:59:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:58.891 16:59:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.891 16:59:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
00:23:58.891 ************************************ 00:23:58.891 START TEST fio_dif_1_multi_subsystems 00:23:58.891 ************************************ 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:58.891 bdev_null0 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:58.891 [2024-11-29 16:59:21.050757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.891 
16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:58.891 bdev_null1 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:58.891 { 00:23:58.891 "params": { 00:23:58.891 "name": "Nvme$subsystem", 00:23:58.891 "trtype": "$TEST_TRANSPORT", 00:23:58.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.891 "adrfam": "ipv4", 00:23:58.891 "trsvcid": "$NVMF_PORT", 00:23:58.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.891 "hdgst": ${hdgst:-false}, 00:23:58.891 "ddgst": ${ddgst:-false} 00:23:58.891 }, 00:23:58.891 "method": "bdev_nvme_attach_controller" 00:23:58.891 } 00:23:58.891 EOF 00:23:58.891 )") 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 
00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:58.891 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:58.892 { 00:23:58.892 "params": { 00:23:58.892 "name": "Nvme$subsystem", 00:23:58.892 "trtype": "$TEST_TRANSPORT", 00:23:58.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.892 "adrfam": "ipv4", 00:23:58.892 "trsvcid": "$NVMF_PORT", 00:23:58.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.892 "hdgst": ${hdgst:-false}, 00:23:58.892 "ddgst": ${ddgst:-false} 00:23:58.892 }, 00:23:58.892 "method": "bdev_nvme_attach_controller" 00:23:58.892 } 00:23:58.892 EOF 00:23:58.892 )") 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:58.892 "params": { 00:23:58.892 "name": "Nvme0", 00:23:58.892 "trtype": "tcp", 00:23:58.892 "traddr": "10.0.0.3", 00:23:58.892 "adrfam": "ipv4", 00:23:58.892 "trsvcid": "4420", 00:23:58.892 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.892 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:58.892 "hdgst": false, 00:23:58.892 "ddgst": false 00:23:58.892 }, 00:23:58.892 "method": "bdev_nvme_attach_controller" 00:23:58.892 },{ 00:23:58.892 "params": { 00:23:58.892 "name": "Nvme1", 00:23:58.892 "trtype": "tcp", 00:23:58.892 "traddr": "10.0.0.3", 00:23:58.892 "adrfam": "ipv4", 00:23:58.892 "trsvcid": "4420", 00:23:58.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.892 "hdgst": false, 00:23:58.892 "ddgst": false 00:23:58.892 }, 00:23:58.892 "method": "bdev_nvme_attach_controller" 00:23:58.892 }' 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:58.892 16:59:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.892 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:58.892 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:58.892 fio-3.35 00:23:58.892 Starting 2 threads 00:24:08.895 00:24:08.895 filename0: (groupid=0, jobs=1): err= 0: pid=99677: Fri Nov 29 16:59:31 2024 00:24:08.895 read: IOPS=5272, BW=20.6MiB/s (21.6MB/s)(206MiB/10001msec) 00:24:08.895 slat (nsec): min=6396, max=56182, avg=12531.57, stdev=4273.46 00:24:08.895 clat (usec): min=566, max=3329, avg=725.00, stdev=62.48 00:24:08.895 lat (usec): min=573, max=3355, avg=737.54, stdev=63.43 00:24:08.895 clat percentiles (usec): 00:24:08.895 | 1.00th=[ 611], 5.00th=[ 644], 10.00th=[ 660], 20.00th=[ 685], 00:24:08.895 | 30.00th=[ 693], 40.00th=[ 709], 50.00th=[ 717], 60.00th=[ 734], 00:24:08.895 | 70.00th=[ 750], 80.00th=[ 766], 90.00th=[ 799], 95.00th=[ 832], 00:24:08.895 | 99.00th=[ 906], 99.50th=[ 938], 99.90th=[ 996], 99.95th=[ 1172], 00:24:08.895 | 99.99th=[ 1352] 00:24:08.895 bw ( KiB/s): min=20640, max=21472, per=50.06%, avg=21116.16, stdev=261.72, samples=19 00:24:08.895 iops : min= 5160, max= 
5368, avg=5279.00, stdev=65.47, samples=19 00:24:08.895 lat (usec) : 750=72.45%, 1000=27.46% 00:24:08.895 lat (msec) : 2=0.09%, 4=0.01% 00:24:08.895 cpu : usr=89.80%, sys=8.83%, ctx=8, majf=0, minf=0 00:24:08.895 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:08.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.895 issued rwts: total=52728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.895 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:08.895 filename1: (groupid=0, jobs=1): err= 0: pid=99678: Fri Nov 29 16:59:31 2024 00:24:08.895 read: IOPS=5272, BW=20.6MiB/s (21.6MB/s)(206MiB/10001msec) 00:24:08.895 slat (nsec): min=6334, max=79539, avg=12616.30, stdev=4347.51 00:24:08.895 clat (usec): min=394, max=2815, avg=724.08, stdev=56.08 00:24:08.895 lat (usec): min=401, max=2842, avg=736.69, stdev=56.75 00:24:08.895 clat percentiles (usec): 00:24:08.895 | 1.00th=[ 644], 5.00th=[ 660], 10.00th=[ 668], 20.00th=[ 685], 00:24:08.895 | 30.00th=[ 693], 40.00th=[ 701], 50.00th=[ 717], 60.00th=[ 725], 00:24:08.895 | 70.00th=[ 742], 80.00th=[ 758], 90.00th=[ 791], 95.00th=[ 824], 00:24:08.895 | 99.00th=[ 898], 99.50th=[ 922], 99.90th=[ 996], 99.95th=[ 1123], 00:24:08.895 | 99.99th=[ 1336] 00:24:08.895 bw ( KiB/s): min=20640, max=21472, per=50.06%, avg=21116.16, stdev=264.74, samples=19 00:24:08.895 iops : min= 5160, max= 5368, avg=5279.00, stdev=66.22, samples=19 00:24:08.895 lat (usec) : 500=0.01%, 750=75.32%, 1000=24.58% 00:24:08.895 lat (msec) : 2=0.09%, 4=0.01% 00:24:08.895 cpu : usr=90.23%, sys=8.42%, ctx=82, majf=0, minf=1 00:24:08.895 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:08.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.895 issued rwts: total=52732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.895 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:08.895 00:24:08.895 Run status group 0 (all jobs): 00:24:08.895 READ: bw=41.2MiB/s (43.2MB/s), 20.6MiB/s-20.6MiB/s (21.6MB/s-21.6MB/s), io=412MiB (432MB), run=10001-10001msec 00:24:08.895 16:59:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:24:08.895 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:24:08.895 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:08.895 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:08.895 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:24:08.895 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:08.895 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.895 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:08.895 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.895 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.896 00:24:08.896 real 0m11.015s 00:24:08.896 user 0m18.673s 00:24:08.896 sys 0m1.980s 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.896 16:59:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:08.896 ************************************ 00:24:08.896 END TEST fio_dif_1_multi_subsystems 00:24:08.896 ************************************ 00:24:08.896 16:59:32 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:24:08.896 16:59:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:08.896 16:59:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.896 16:59:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:08.896 ************************************ 00:24:08.896 START TEST fio_dif_rand_params 00:24:08.896 ************************************ 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:08.896 16:59:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.896 bdev_null0 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.896 [2024-11-29 16:59:32.123679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:08.896 { 00:24:08.896 "params": { 00:24:08.896 "name": "Nvme$subsystem", 00:24:08.896 "trtype": "$TEST_TRANSPORT", 00:24:08.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.896 "adrfam": "ipv4", 00:24:08.896 "trsvcid": "$NVMF_PORT", 00:24:08.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.896 "hdgst": ${hdgst:-false}, 00:24:08.896 "ddgst": ${ddgst:-false} 
00:24:08.896 }, 00:24:08.896 "method": "bdev_nvme_attach_controller" 00:24:08.896 } 00:24:08.896 EOF 00:24:08.896 )") 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:08.896 "params": { 00:24:08.896 "name": "Nvme0", 00:24:08.896 "trtype": "tcp", 00:24:08.896 "traddr": "10.0.0.3", 00:24:08.896 "adrfam": "ipv4", 00:24:08.896 "trsvcid": "4420", 00:24:08.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:08.896 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:08.896 "hdgst": false, 00:24:08.896 "ddgst": false 00:24:08.896 }, 00:24:08.896 "method": "bdev_nvme_attach_controller" 00:24:08.896 }' 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:08.896 16:59:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:08.897 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:08.897 ... 
00:24:08.897 fio-3.35 00:24:08.897 Starting 3 threads 00:24:14.170 00:24:14.170 filename0: (groupid=0, jobs=1): err= 0: pid=99834: Fri Nov 29 16:59:37 2024 00:24:14.170 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(174MiB/5005msec) 00:24:14.170 slat (nsec): min=6688, max=44476, avg=9580.94, stdev=3894.29 00:24:14.170 clat (usec): min=7186, max=13024, avg=10762.05, stdev=480.36 00:24:14.170 lat (usec): min=7193, max=13050, avg=10771.63, stdev=480.61 00:24:14.170 clat percentiles (usec): 00:24:14.170 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10421], 20.00th=[10552], 00:24:14.170 | 30.00th=[10552], 40.00th=[10552], 50.00th=[10552], 60.00th=[10683], 00:24:14.170 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11338], 95.00th=[11731], 00:24:14.170 | 99.00th=[12649], 99.50th=[12780], 99.90th=[13042], 99.95th=[13042], 00:24:14.170 | 99.99th=[13042] 00:24:14.170 bw ( KiB/s): min=34560, max=36096, per=33.35%, avg=35584.00, stdev=543.06, samples=9 00:24:14.170 iops : min= 270, max= 282, avg=278.00, stdev= 4.24, samples=9 00:24:14.170 lat (msec) : 10=0.22%, 20=99.78% 00:24:14.170 cpu : usr=91.65%, sys=7.79%, ctx=8, majf=0, minf=9 00:24:14.170 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.170 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.170 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:14.170 filename0: (groupid=0, jobs=1): err= 0: pid=99835: Fri Nov 29 16:59:37 2024 00:24:14.170 read: IOPS=277, BW=34.7MiB/s (36.4MB/s)(174MiB/5009msec) 00:24:14.170 slat (nsec): min=6840, max=46859, avg=13702.77, stdev=4426.59 00:24:14.170 clat (usec): min=10315, max=13613, avg=10763.06, stdev=460.83 00:24:14.170 lat (usec): min=10327, max=13636, avg=10776.76, stdev=461.18 00:24:14.170 clat percentiles (usec): 00:24:14.170 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10421], 20.00th=[10421], 00:24:14.170 | 30.00th=[10552], 40.00th=[10552], 50.00th=[10552], 60.00th=[10683], 00:24:14.170 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:24:14.170 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13566], 99.95th=[13566], 00:24:14.170 | 99.99th=[13566] 00:24:14.170 bw ( KiB/s): min=34560, max=36864, per=33.33%, avg=35565.30, stdev=718.34, samples=10 00:24:14.170 iops : min= 270, max= 288, avg=277.80, stdev= 5.69, samples=10 00:24:14.170 lat (msec) : 20=100.00% 00:24:14.170 cpu : usr=91.07%, sys=8.41%, ctx=5, majf=0, minf=9 00:24:14.170 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.171 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.171 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:14.171 filename0: (groupid=0, jobs=1): err= 0: pid=99836: Fri Nov 29 16:59:37 2024 00:24:14.171 read: IOPS=277, BW=34.7MiB/s (36.4MB/s)(174MiB/5010msec) 00:24:14.171 slat (nsec): min=4771, max=43921, avg=13151.79, stdev=4146.46 00:24:14.171 clat (usec): min=10319, max=14948, avg=10768.00, stdev=481.88 00:24:14.171 lat (usec): min=10331, max=14965, avg=10781.16, stdev=482.15 00:24:14.171 clat percentiles (usec): 00:24:14.171 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10421], 20.00th=[10421], 00:24:14.171 | 30.00th=[10552], 40.00th=[10552], 
50.00th=[10552], 60.00th=[10683], 00:24:14.171 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:24:14.171 | 99.00th=[12518], 99.50th=[12649], 99.90th=[14877], 99.95th=[15008], 00:24:14.171 | 99.99th=[15008] 00:24:14.171 bw ( KiB/s): min=34491, max=36864, per=33.32%, avg=35551.50, stdev=739.34, samples=10 00:24:14.171 iops : min= 269, max= 288, avg=277.70, stdev= 5.85, samples=10 00:24:14.171 lat (msec) : 20=100.00% 00:24:14.171 cpu : usr=90.90%, sys=8.60%, ctx=7, majf=0, minf=9 00:24:14.171 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.171 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.171 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:14.171 00:24:14.171 Run status group 0 (all jobs): 00:24:14.171 READ: bw=104MiB/s (109MB/s), 34.7MiB/s-34.8MiB/s (36.4MB/s-36.5MB/s), io=522MiB (547MB), run=5005-5010msec 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:24:14.432 16:59:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:24:14.432 16:59:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.432 bdev_null0 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.432 [2024-11-29 16:59:38.032730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.432 bdev_null1 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.432 bdev_null2 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:14.432 { 00:24:14.432 "params": { 00:24:14.432 "name": "Nvme$subsystem", 00:24:14.432 "trtype": "$TEST_TRANSPORT", 00:24:14.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.432 "adrfam": "ipv4", 00:24:14.432 "trsvcid": "$NVMF_PORT", 00:24:14.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:24:14.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.432 "hdgst": ${hdgst:-false}, 00:24:14.432 "ddgst": ${ddgst:-false} 00:24:14.432 }, 00:24:14.432 "method": "bdev_nvme_attach_controller" 00:24:14.432 } 00:24:14.432 EOF 00:24:14.432 )") 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:14.432 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:14.433 { 00:24:14.433 "params": { 00:24:14.433 "name": "Nvme$subsystem", 00:24:14.433 "trtype": "$TEST_TRANSPORT", 00:24:14.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.433 "adrfam": "ipv4", 00:24:14.433 "trsvcid": "$NVMF_PORT", 00:24:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.433 "hdgst": ${hdgst:-false}, 00:24:14.433 "ddgst": ${ddgst:-false} 00:24:14.433 }, 00:24:14.433 "method": "bdev_nvme_attach_controller" 00:24:14.433 } 00:24:14.433 EOF 00:24:14.433 )") 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:14.433 16:59:38 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:14.433 { 00:24:14.433 "params": { 00:24:14.433 "name": "Nvme$subsystem", 00:24:14.433 "trtype": "$TEST_TRANSPORT", 00:24:14.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.433 "adrfam": "ipv4", 00:24:14.433 "trsvcid": "$NVMF_PORT", 00:24:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.433 "hdgst": ${hdgst:-false}, 00:24:14.433 "ddgst": ${ddgst:-false} 00:24:14.433 }, 00:24:14.433 "method": "bdev_nvme_attach_controller" 00:24:14.433 } 00:24:14.433 EOF 00:24:14.433 )") 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:14.433 "params": { 00:24:14.433 "name": "Nvme0", 00:24:14.433 "trtype": "tcp", 00:24:14.433 "traddr": "10.0.0.3", 00:24:14.433 "adrfam": "ipv4", 00:24:14.433 "trsvcid": "4420", 00:24:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:14.433 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:14.433 "hdgst": false, 00:24:14.433 "ddgst": false 00:24:14.433 }, 00:24:14.433 "method": "bdev_nvme_attach_controller" 00:24:14.433 },{ 00:24:14.433 "params": { 00:24:14.433 "name": "Nvme1", 00:24:14.433 "trtype": "tcp", 00:24:14.433 "traddr": "10.0.0.3", 00:24:14.433 "adrfam": "ipv4", 00:24:14.433 "trsvcid": "4420", 00:24:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:14.433 "hdgst": false, 00:24:14.433 "ddgst": false 00:24:14.433 }, 00:24:14.433 "method": "bdev_nvme_attach_controller" 00:24:14.433 },{ 00:24:14.433 "params": { 00:24:14.433 "name": "Nvme2", 00:24:14.433 "trtype": "tcp", 00:24:14.433 "traddr": "10.0.0.3", 00:24:14.433 "adrfam": "ipv4", 00:24:14.433 "trsvcid": "4420", 00:24:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:14.433 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:14.433 "hdgst": false, 00:24:14.433 "ddgst": false 00:24:14.433 }, 00:24:14.433 "method": "bdev_nvme_attach_controller" 00:24:14.433 }' 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:14.433 16:59:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:14.693 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:14.693 ... 00:24:14.693 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:14.693 ... 00:24:14.693 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:14.693 ... 00:24:14.693 fio-3.35 00:24:14.693 Starting 24 threads 00:24:26.908 00:24:26.908 filename0: (groupid=0, jobs=1): err= 0: pid=99931: Fri Nov 29 16:59:49 2024 00:24:26.908 read: IOPS=199, BW=798KiB/s (817kB/s)(8004KiB/10036msec) 00:24:26.908 slat (usec): min=4, max=8025, avg=22.23, stdev=253.26 00:24:26.908 clat (msec): min=32, max=128, avg=80.09, stdev=22.66 00:24:26.908 lat (msec): min=32, max=128, avg=80.12, stdev=22.66 00:24:26.908 clat percentiles (msec): 00:24:26.908 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:24:26.908 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:24:26.908 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 109], 95.00th=[ 121], 00:24:26.908 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 129], 99.95th=[ 129], 00:24:26.908 | 99.99th=[ 129] 00:24:26.908 bw ( KiB/s): min= 664, max= 1000, per=4.31%, avg=794.00, stdev=130.45, samples=20 00:24:26.908 iops : min= 166, max= 250, avg=198.50, stdev=32.61, samples=20 00:24:26.908 lat (msec) : 50=13.04%, 100=62.72%, 250=24.24% 00:24:26.908 cpu : usr=31.24%, sys=1.84%, ctx=860, majf=0, minf=9 00:24:26.908 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:26.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.908 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.908 issued rwts: total=2001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.908 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.908 filename0: (groupid=0, jobs=1): err= 0: pid=99932: Fri Nov 29 16:59:49 2024 00:24:26.908 read: IOPS=194, BW=777KiB/s (796kB/s)(7784KiB/10016msec) 00:24:26.908 slat (usec): min=4, max=4030, avg=17.13, stdev=91.16 00:24:26.908 clat (msec): min=26, max=143, avg=82.18, stdev=24.20 00:24:26.908 lat (msec): min=26, max=143, avg=82.20, stdev=24.20 00:24:26.908 clat percentiles (msec): 00:24:26.908 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 57], 00:24:26.908 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 86], 00:24:26.908 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 113], 95.00th=[ 120], 00:24:26.908 | 99.00th=[ 128], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:24:26.908 | 99.99th=[ 144] 00:24:26.908 bw ( KiB/s): min= 528, max= 1024, per=4.21%, avg=774.40, stdev=169.02, samples=20 00:24:26.908 iops : min= 132, max= 256, avg=193.60, stdev=42.26, samples=20 00:24:26.908 lat (msec) : 50=11.51%, 100=56.68%, 250=31.81% 00:24:26.908 cpu : usr=40.97%, sys=2.40%, ctx=1354, majf=0, minf=9 00:24:26.908 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=76.4%, 16=14.9%, 32=0.0%, >=64=0.0% 00:24:26.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.908 complete : 0=0.0%, 4=88.7%, 8=9.7%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.908 issued rwts: total=1946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.908 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:24:26.908 filename0: (groupid=0, jobs=1): err= 0: pid=99933: Fri Nov 29 16:59:49 2024 00:24:26.908 read: IOPS=194, BW=777KiB/s (795kB/s)(7808KiB/10054msec) 00:24:26.908 slat (usec): min=6, max=8027, avg=28.79, stdev=299.82 00:24:26.908 clat (msec): min=11, max=150, avg=82.13, stdev=26.54 00:24:26.908 lat (msec): min=11, max=151, avg=82.15, stdev=26.54 00:24:26.908 clat percentiles (msec): 00:24:26.908 | 1.00th=[ 14], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 61], 00:24:26.908 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 87], 00:24:26.908 | 70.00th=[ 103], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 121], 00:24:26.908 | 99.00th=[ 134], 99.50th=[ 138], 99.90th=[ 150], 99.95th=[ 153], 00:24:26.908 | 99.99th=[ 153] 00:24:26.908 bw ( KiB/s): min= 544, max= 1280, per=4.22%, avg=776.15, stdev=185.52, samples=20 00:24:26.908 iops : min= 136, max= 320, avg=194.00, stdev=46.40, samples=20 00:24:26.908 lat (msec) : 20=2.36%, 50=9.27%, 100=57.33%, 250=31.05% 00:24:26.908 cpu : usr=34.89%, sys=1.77%, ctx=1029, majf=0, minf=9 00:24:26.908 IO depths : 1=0.1%, 2=1.4%, 4=5.3%, 8=77.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:26.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.908 complete : 0=0.0%, 4=88.7%, 8=10.1%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.908 issued rwts: total=1952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.908 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.908 filename0: (groupid=0, jobs=1): err= 0: pid=99934: Fri Nov 29 16:59:49 2024 00:24:26.908 read: IOPS=204, BW=818KiB/s (838kB/s)(8188KiB/10008msec) 00:24:26.908 slat (usec): min=3, max=8026, avg=27.65, stdev=298.09 00:24:26.908 clat (msec): min=12, max=127, avg=78.11, stdev=23.38 00:24:26.908 lat (msec): min=12, max=127, avg=78.13, stdev=23.38 00:24:26.908 clat percentiles (msec): 00:24:26.908 | 1.00th=[ 32], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:24:26.908 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 82], 00:24:26.908 | 70.00th=[ 87], 80.00th=[ 106], 90.00th=[ 111], 95.00th=[ 118], 00:24:26.908 | 99.00th=[ 123], 99.50th=[ 125], 99.90th=[ 128], 99.95th=[ 129], 00:24:26.908 | 99.99th=[ 129] 00:24:26.908 bw ( KiB/s): min= 616, max= 1072, per=4.42%, avg=814.74, stdev=136.27, samples=19 00:24:26.908 iops : min= 154, max= 268, avg=203.68, stdev=34.07, samples=19 00:24:26.908 lat (msec) : 20=0.64%, 50=13.92%, 100=61.41%, 250=24.04% 00:24:26.908 cpu : usr=36.41%, sys=2.19%, ctx=1267, majf=0, minf=9 00:24:26.908 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:26.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.908 complete : 0=0.0%, 4=86.7%, 8=13.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.908 issued rwts: total=2047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.908 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.908 filename0: (groupid=0, jobs=1): err= 0: pid=99935: Fri Nov 29 16:59:49 2024 00:24:26.908 read: IOPS=182, BW=732KiB/s (749kB/s)(7328KiB/10012msec) 00:24:26.908 slat (usec): min=4, max=8022, avg=21.14, stdev=213.48 00:24:26.908 clat (msec): min=37, max=159, avg=87.29, stdev=23.38 00:24:26.908 lat (msec): min=37, max=159, avg=87.31, stdev=23.38 00:24:26.908 clat percentiles (msec): 00:24:26.908 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 68], 00:24:26.908 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 103], 00:24:26.908 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 116], 95.00th=[ 
121], 00:24:26.908 | 99.00th=[ 138], 99.50th=[ 138], 99.90th=[ 159], 99.95th=[ 159], 00:24:26.908 | 99.99th=[ 159] 00:24:26.908 bw ( KiB/s): min= 512, max= 1024, per=3.96%, avg=729.68, stdev=152.39, samples=19 00:24:26.908 iops : min= 128, max= 256, avg=182.42, stdev=38.10, samples=19 00:24:26.908 lat (msec) : 50=7.10%, 100=50.76%, 250=42.14% 00:24:26.908 cpu : usr=41.30%, sys=2.30%, ctx=1232, majf=0, minf=9 00:24:26.908 IO depths : 1=0.1%, 2=3.2%, 4=12.4%, 8=70.3%, 16=14.0%, 32=0.0%, >=64=0.0% 00:24:26.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.908 complete : 0=0.0%, 4=90.4%, 8=6.9%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.908 issued rwts: total=1832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.908 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.908 filename0: (groupid=0, jobs=1): err= 0: pid=99936: Fri Nov 29 16:59:49 2024 00:24:26.908 read: IOPS=183, BW=733KiB/s (751kB/s)(7340KiB/10011msec) 00:24:26.908 slat (usec): min=3, max=8024, avg=27.50, stdev=295.57 00:24:26.908 clat (msec): min=12, max=155, avg=87.13, stdev=25.65 00:24:26.908 lat (msec): min=12, max=156, avg=87.16, stdev=25.65 00:24:26.908 clat percentiles (msec): 00:24:26.908 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 64], 00:24:26.908 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 96], 00:24:26.908 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 122], 00:24:26.908 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:24:26.908 | 99.99th=[ 157] 00:24:26.908 bw ( KiB/s): min= 512, max= 1024, per=3.94%, avg=725.47, stdev=168.72, samples=19 00:24:26.908 iops : min= 128, max= 256, avg=181.37, stdev=42.18, samples=19 00:24:26.908 lat (msec) : 20=0.71%, 50=6.87%, 100=57.66%, 250=34.77% 00:24:26.909 cpu : usr=34.10%, sys=1.87%, ctx=1089, majf=0, minf=9 00:24:26.909 IO depths : 1=0.1%, 2=2.7%, 4=10.6%, 8=72.1%, 16=14.6%, 32=0.0%, >=64=0.0% 00:24:26.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 complete : 0=0.0%, 4=90.1%, 8=7.6%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 issued rwts: total=1835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.909 filename0: (groupid=0, jobs=1): err= 0: pid=99937: Fri Nov 29 16:59:49 2024 00:24:26.909 read: IOPS=185, BW=742KiB/s (760kB/s)(7464KiB/10053msec) 00:24:26.909 slat (usec): min=6, max=8034, avg=42.66, stdev=474.65 00:24:26.909 clat (msec): min=19, max=158, avg=85.94, stdev=26.89 00:24:26.909 lat (msec): min=19, max=158, avg=85.98, stdev=26.90 00:24:26.909 clat percentiles (msec): 00:24:26.909 | 1.00th=[ 25], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 63], 00:24:26.909 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 96], 00:24:26.909 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 131], 00:24:26.909 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 159], 00:24:26.909 | 99.99th=[ 159] 00:24:26.909 bw ( KiB/s): min= 512, max= 1145, per=4.02%, avg=739.65, stdev=177.03, samples=20 00:24:26.909 iops : min= 128, max= 286, avg=184.90, stdev=44.23, samples=20 00:24:26.909 lat (msec) : 20=0.75%, 50=10.40%, 100=53.27%, 250=35.58% 00:24:26.909 cpu : usr=31.32%, sys=1.98%, ctx=870, majf=0, minf=9 00:24:26.909 IO depths : 1=0.1%, 2=1.8%, 4=7.4%, 8=75.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:26.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 complete : 0=0.0%, 4=89.5%, 8=8.9%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:24:26.909 issued rwts: total=1866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.909 filename0: (groupid=0, jobs=1): err= 0: pid=99938: Fri Nov 29 16:59:49 2024 00:24:26.909 read: IOPS=184, BW=739KiB/s (757kB/s)(7416KiB/10038msec) 00:24:26.909 slat (usec): min=6, max=8024, avg=20.22, stdev=191.89 00:24:26.909 clat (msec): min=24, max=151, avg=86.46, stdev=24.09 00:24:26.909 lat (msec): min=24, max=151, avg=86.48, stdev=24.09 00:24:26.909 clat percentiles (msec): 00:24:26.909 | 1.00th=[ 31], 5.00th=[ 46], 10.00th=[ 53], 20.00th=[ 68], 00:24:26.909 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 99], 00:24:26.909 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 115], 95.00th=[ 122], 00:24:26.909 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:24:26.909 | 99.99th=[ 153] 00:24:26.909 bw ( KiB/s): min= 512, max= 1136, per=3.99%, avg=735.20, stdev=170.93, samples=20 00:24:26.909 iops : min= 128, max= 284, avg=183.80, stdev=42.73, samples=20 00:24:26.909 lat (msec) : 50=8.09%, 100=53.72%, 250=38.19% 00:24:26.909 cpu : usr=40.08%, sys=2.49%, ctx=1537, majf=0, minf=9 00:24:26.909 IO depths : 1=0.1%, 2=3.3%, 4=13.5%, 8=69.1%, 16=14.0%, 32=0.0%, >=64=0.0% 00:24:26.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 complete : 0=0.0%, 4=90.8%, 8=6.2%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 issued rwts: total=1854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.909 filename1: (groupid=0, jobs=1): err= 0: pid=99939: Fri Nov 29 16:59:49 2024 00:24:26.909 read: IOPS=193, BW=772KiB/s (791kB/s)(7752KiB/10035msec) 00:24:26.909 slat (usec): min=4, max=8032, avg=22.30, stdev=257.49 00:24:26.909 clat (msec): min=31, max=131, avg=82.71, stdev=23.00 00:24:26.909 lat (msec): min=31, max=131, avg=82.73, stdev=22.99 00:24:26.909 clat percentiles (msec): 00:24:26.909 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 63], 00:24:26.909 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 85], 00:24:26.909 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 114], 95.00th=[ 121], 00:24:26.909 | 99.00th=[ 125], 99.50th=[ 130], 99.90th=[ 132], 99.95th=[ 132], 00:24:26.909 | 99.99th=[ 132] 00:24:26.909 bw ( KiB/s): min= 608, max= 1000, per=4.17%, avg=768.80, stdev=133.33, samples=20 00:24:26.909 iops : min= 152, max= 250, avg=192.20, stdev=33.33, samples=20 00:24:26.909 lat (msec) : 50=10.32%, 100=61.46%, 250=28.22% 00:24:26.909 cpu : usr=34.73%, sys=2.02%, ctx=1168, majf=0, minf=10 00:24:26.909 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=82.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:24:26.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 issued rwts: total=1938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.909 filename1: (groupid=0, jobs=1): err= 0: pid=99940: Fri Nov 29 16:59:49 2024 00:24:26.909 read: IOPS=204, BW=818KiB/s (838kB/s)(8184KiB/10005msec) 00:24:26.909 slat (usec): min=4, max=8026, avg=33.28, stdev=395.67 00:24:26.909 clat (msec): min=6, max=129, avg=78.10, stdev=23.41 00:24:26.909 lat (msec): min=6, max=129, avg=78.13, stdev=23.41 00:24:26.909 clat percentiles (msec): 00:24:26.909 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:24:26.909 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 84], 
00:24:26.909 | 70.00th=[ 86], 80.00th=[ 108], 90.00th=[ 109], 95.00th=[ 121], 00:24:26.909 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:24:26.909 | 99.99th=[ 130] 00:24:26.909 bw ( KiB/s): min= 664, max= 1080, per=4.43%, avg=816.84, stdev=141.48, samples=19 00:24:26.909 iops : min= 166, max= 270, avg=204.21, stdev=35.37, samples=19 00:24:26.909 lat (msec) : 10=0.64%, 50=15.10%, 100=61.44%, 250=22.83% 00:24:26.909 cpu : usr=31.37%, sys=1.77%, ctx=872, majf=0, minf=9 00:24:26.909 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:26.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 complete : 0=0.0%, 4=86.7%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 issued rwts: total=2046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.909 filename1: (groupid=0, jobs=1): err= 0: pid=99941: Fri Nov 29 16:59:49 2024 00:24:26.909 read: IOPS=199, BW=799KiB/s (818kB/s)(8056KiB/10080msec) 00:24:26.909 slat (usec): min=6, max=4026, avg=17.50, stdev=126.50 00:24:26.909 clat (usec): min=605, max=167830, avg=79821.91, stdev=29551.87 00:24:26.909 lat (usec): min=616, max=167845, avg=79839.42, stdev=29554.15 00:24:26.909 clat percentiles (msec): 00:24:26.909 | 1.00th=[ 3], 5.00th=[ 20], 10.00th=[ 45], 20.00th=[ 56], 00:24:26.909 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 85], 00:24:26.909 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 116], 95.00th=[ 121], 00:24:26.909 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 167], 99.95th=[ 167], 00:24:26.909 | 99.99th=[ 169] 00:24:26.909 bw ( KiB/s): min= 544, max= 1792, per=4.34%, avg=799.20, stdev=275.22, samples=20 00:24:26.909 iops : min= 136, max= 448, avg=199.80, stdev=68.81, samples=20 00:24:26.909 lat (usec) : 750=0.10% 00:24:26.909 lat (msec) : 4=1.49%, 10=0.70%, 20=3.28%, 50=8.64%, 100=53.72% 00:24:26.909 lat (msec) : 250=32.08% 00:24:26.909 cpu : usr=47.27%, sys=2.79%, ctx=1368, majf=0, minf=9 00:24:26.909 IO depths : 1=0.1%, 2=1.5%, 4=5.7%, 8=77.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:26.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 complete : 0=0.0%, 4=88.9%, 8=9.9%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 issued rwts: total=2014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.909 filename1: (groupid=0, jobs=1): err= 0: pid=99942: Fri Nov 29 16:59:49 2024 00:24:26.909 read: IOPS=180, BW=723KiB/s (740kB/s)(7268KiB/10059msec) 00:24:26.909 slat (usec): min=4, max=4030, avg=18.93, stdev=133.22 00:24:26.909 clat (msec): min=14, max=155, avg=88.44, stdev=27.87 00:24:26.909 lat (msec): min=14, max=155, avg=88.46, stdev=27.87 00:24:26.909 clat percentiles (msec): 00:24:26.909 | 1.00th=[ 17], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 64], 00:24:26.909 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 94], 60.00th=[ 101], 00:24:26.909 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 129], 00:24:26.909 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 157], 00:24:26.909 | 99.99th=[ 157] 00:24:26.909 bw ( KiB/s): min= 496, max= 1168, per=3.91%, avg=720.00, stdev=189.68, samples=20 00:24:26.909 iops : min= 124, max= 292, avg=179.95, stdev=47.44, samples=20 00:24:26.909 lat (msec) : 20=1.76%, 50=7.98%, 100=50.74%, 250=39.52% 00:24:26.909 cpu : usr=39.15%, sys=2.16%, ctx=1316, majf=0, minf=9 00:24:26.909 IO depths : 1=0.1%, 2=2.9%, 4=11.8%, 8=70.5%, 16=14.7%, 32=0.0%, 
>=64=0.0% 00:24:26.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 complete : 0=0.0%, 4=90.7%, 8=6.7%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 issued rwts: total=1817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.909 filename1: (groupid=0, jobs=1): err= 0: pid=99943: Fri Nov 29 16:59:49 2024 00:24:26.909 read: IOPS=173, BW=694KiB/s (711kB/s)(6956KiB/10016msec) 00:24:26.909 slat (usec): min=4, max=8023, avg=28.15, stdev=271.57 00:24:26.909 clat (msec): min=27, max=164, avg=91.91, stdev=22.83 00:24:26.909 lat (msec): min=27, max=164, avg=91.94, stdev=22.83 00:24:26.909 clat percentiles (msec): 00:24:26.909 | 1.00th=[ 41], 5.00th=[ 54], 10.00th=[ 66], 20.00th=[ 72], 00:24:26.909 | 30.00th=[ 79], 40.00th=[ 83], 50.00th=[ 93], 60.00th=[ 104], 00:24:26.909 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 118], 95.00th=[ 123], 00:24:26.909 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 165], 99.95th=[ 165], 00:24:26.909 | 99.99th=[ 165] 00:24:26.909 bw ( KiB/s): min= 528, max= 1005, per=3.76%, avg=691.85, stdev=140.32, samples=20 00:24:26.909 iops : min= 132, max= 251, avg=172.95, stdev=35.05, samples=20 00:24:26.909 lat (msec) : 50=4.08%, 100=50.78%, 250=45.14% 00:24:26.909 cpu : usr=43.33%, sys=2.62%, ctx=1280, majf=0, minf=9 00:24:26.909 IO depths : 1=0.1%, 2=4.7%, 4=18.6%, 8=63.3%, 16=13.4%, 32=0.0%, >=64=0.0% 00:24:26.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 complete : 0=0.0%, 4=92.4%, 8=3.4%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.909 issued rwts: total=1739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.909 filename1: (groupid=0, jobs=1): err= 0: pid=99944: Fri Nov 29 16:59:49 2024 00:24:26.909 read: IOPS=184, BW=739KiB/s (757kB/s)(7392KiB/10003msec) 00:24:26.909 slat (usec): min=4, max=8023, avg=18.16, stdev=186.37 00:24:26.909 clat (msec): min=9, max=168, avg=86.49, stdev=24.52 00:24:26.909 lat (msec): min=9, max=168, avg=86.50, stdev=24.52 00:24:26.909 clat percentiles (msec): 00:24:26.910 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 70], 00:24:26.910 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 96], 00:24:26.910 | 70.00th=[ 108], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 121], 00:24:26.910 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 169], 99.95th=[ 169], 00:24:26.910 | 99.99th=[ 169] 00:24:26.910 bw ( KiB/s): min= 512, max= 1064, per=3.98%, avg=733.89, stdev=173.04, samples=19 00:24:26.910 iops : min= 128, max= 266, avg=183.47, stdev=43.26, samples=19 00:24:26.910 lat (msec) : 10=0.70%, 50=9.36%, 100=53.41%, 250=36.53% 00:24:26.910 cpu : usr=31.25%, sys=1.93%, ctx=869, majf=0, minf=9 00:24:26.910 IO depths : 1=0.1%, 2=2.8%, 4=11.0%, 8=72.0%, 16=14.2%, 32=0.0%, >=64=0.0% 00:24:26.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 complete : 0=0.0%, 4=90.0%, 8=7.6%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.910 filename1: (groupid=0, jobs=1): err= 0: pid=99945: Fri Nov 29 16:59:49 2024 00:24:26.910 read: IOPS=202, BW=808KiB/s (828kB/s)(8136KiB/10065msec) 00:24:26.910 slat (usec): min=3, max=8023, avg=28.62, stdev=270.57 00:24:26.910 clat (msec): min=2, max=163, avg=78.86, stdev=28.29 00:24:26.910 lat (msec): min=2, max=163, avg=78.89, stdev=28.29 
00:24:26.910 clat percentiles (msec): 00:24:26.910 | 1.00th=[ 3], 5.00th=[ 23], 10.00th=[ 47], 20.00th=[ 60], 00:24:26.910 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 83], 00:24:26.910 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 115], 95.00th=[ 121], 00:24:26.910 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 161], 99.95th=[ 165], 00:24:26.910 | 99.99th=[ 165] 00:24:26.910 bw ( KiB/s): min= 547, max= 1792, per=4.40%, avg=809.40, stdev=264.71, samples=20 00:24:26.910 iops : min= 136, max= 448, avg=202.30, stdev=66.22, samples=20 00:24:26.910 lat (msec) : 4=1.57%, 10=1.57%, 20=1.67%, 50=9.14%, 100=58.41% 00:24:26.910 lat (msec) : 250=27.63% 00:24:26.910 cpu : usr=41.87%, sys=2.72%, ctx=1490, majf=0, minf=9 00:24:26.910 IO depths : 1=0.1%, 2=0.9%, 4=3.3%, 8=79.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:26.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 issued rwts: total=2034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.910 filename1: (groupid=0, jobs=1): err= 0: pid=99946: Fri Nov 29 16:59:49 2024 00:24:26.910 read: IOPS=183, BW=733KiB/s (751kB/s)(7372KiB/10054msec) 00:24:26.910 slat (usec): min=6, max=6708, avg=31.54, stdev=289.39 00:24:26.910 clat (msec): min=20, max=176, avg=87.09, stdev=28.24 00:24:26.910 lat (msec): min=20, max=176, avg=87.12, stdev=28.25 00:24:26.910 clat percentiles (msec): 00:24:26.910 | 1.00th=[ 25], 5.00th=[ 41], 10.00th=[ 51], 20.00th=[ 62], 00:24:26.910 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 102], 00:24:26.910 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 134], 00:24:26.910 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 176], 99.95th=[ 176], 00:24:26.910 | 99.99th=[ 176] 00:24:26.910 bw ( KiB/s): min= 512, max= 1142, per=3.97%, avg=730.30, stdev=194.77, samples=20 00:24:26.910 iops : min= 128, max= 285, avg=182.55, stdev=48.64, samples=20 00:24:26.910 lat (msec) : 50=9.88%, 100=49.10%, 250=41.02% 00:24:26.910 cpu : usr=40.37%, sys=2.24%, ctx=1273, majf=0, minf=9 00:24:26.910 IO depths : 1=0.1%, 2=2.6%, 4=10.4%, 8=72.2%, 16=14.8%, 32=0.0%, >=64=0.0% 00:24:26.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 complete : 0=0.0%, 4=90.2%, 8=7.5%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 issued rwts: total=1843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.910 filename2: (groupid=0, jobs=1): err= 0: pid=99947: Fri Nov 29 16:59:49 2024 00:24:26.910 read: IOPS=200, BW=803KiB/s (822kB/s)(8044KiB/10021msec) 00:24:26.910 slat (usec): min=3, max=8031, avg=28.78, stdev=321.94 00:24:26.910 clat (msec): min=32, max=123, avg=79.60, stdev=22.25 00:24:26.910 lat (msec): min=32, max=123, avg=79.63, stdev=22.25 00:24:26.910 clat percentiles (msec): 00:24:26.910 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:24:26.910 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:24:26.910 | 70.00th=[ 88], 80.00th=[ 106], 90.00th=[ 111], 95.00th=[ 118], 00:24:26.910 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 125], 99.95th=[ 125], 00:24:26.910 | 99.99th=[ 125] 00:24:26.910 bw ( KiB/s): min= 664, max= 1024, per=4.33%, avg=797.70, stdev=126.08, samples=20 00:24:26.910 iops : min= 166, max= 256, avg=199.40, stdev=31.50, samples=20 00:24:26.910 lat (msec) : 50=13.97%, 100=61.26%, 250=24.76% 00:24:26.910 cpu : usr=36.14%, 
sys=2.11%, ctx=1067, majf=0, minf=9 00:24:26.910 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=83.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:26.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 issued rwts: total=2011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.910 filename2: (groupid=0, jobs=1): err= 0: pid=99948: Fri Nov 29 16:59:49 2024 00:24:26.910 read: IOPS=201, BW=806KiB/s (825kB/s)(8100KiB/10048msec) 00:24:26.910 slat (usec): min=6, max=4031, avg=20.61, stdev=154.39 00:24:26.910 clat (msec): min=14, max=156, avg=79.14, stdev=24.91 00:24:26.910 lat (msec): min=14, max=156, avg=79.16, stdev=24.90 00:24:26.910 clat percentiles (msec): 00:24:26.910 | 1.00th=[ 19], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 56], 00:24:26.910 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 83], 00:24:26.910 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 113], 95.00th=[ 121], 00:24:26.910 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 157], 00:24:26.910 | 99.99th=[ 157] 00:24:26.910 bw ( KiB/s): min= 584, max= 1309, per=4.37%, avg=805.85, stdev=175.54, samples=20 00:24:26.910 iops : min= 146, max= 327, avg=201.45, stdev=43.85, samples=20 00:24:26.910 lat (msec) : 20=1.09%, 50=13.93%, 100=59.41%, 250=25.58% 00:24:26.910 cpu : usr=40.68%, sys=2.59%, ctx=1148, majf=0, minf=9 00:24:26.910 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:26.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 issued rwts: total=2025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.910 filename2: (groupid=0, jobs=1): err= 0: pid=99949: Fri Nov 29 16:59:49 2024 00:24:26.910 read: IOPS=197, BW=791KiB/s (810kB/s)(7956KiB/10063msec) 00:24:26.910 slat (usec): min=4, max=8032, avg=27.79, stdev=323.76 00:24:26.910 clat (msec): min=8, max=151, avg=80.65, stdev=25.68 00:24:26.910 lat (msec): min=8, max=151, avg=80.68, stdev=25.69 00:24:26.910 clat percentiles (msec): 00:24:26.910 | 1.00th=[ 12], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 63], 00:24:26.910 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 85], 00:24:26.910 | 70.00th=[ 99], 80.00th=[ 107], 90.00th=[ 113], 95.00th=[ 120], 00:24:26.910 | 99.00th=[ 129], 99.50th=[ 129], 99.90th=[ 153], 99.95th=[ 153], 00:24:26.910 | 99.99th=[ 153] 00:24:26.910 bw ( KiB/s): min= 584, max= 1592, per=4.30%, avg=791.00, stdev=218.92, samples=20 00:24:26.910 iops : min= 146, max= 398, avg=197.70, stdev=54.75, samples=20 00:24:26.910 lat (msec) : 10=0.85%, 20=1.91%, 50=9.75%, 100=59.02%, 250=28.46% 00:24:26.910 cpu : usr=36.40%, sys=2.07%, ctx=1040, majf=0, minf=9 00:24:26.910 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:24:26.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 issued rwts: total=1989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.910 filename2: (groupid=0, jobs=1): err= 0: pid=99950: Fri Nov 29 16:59:49 2024 00:24:26.910 read: IOPS=198, BW=792KiB/s (811kB/s)(7952KiB/10038msec) 00:24:26.910 slat (usec): min=6, max=8023, avg=27.90, 
stdev=310.28 00:24:26.910 clat (msec): min=23, max=143, avg=80.65, stdev=23.54 00:24:26.910 lat (msec): min=23, max=143, avg=80.68, stdev=23.55 00:24:26.910 clat percentiles (msec): 00:24:26.910 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:24:26.910 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:24:26.910 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 114], 95.00th=[ 120], 00:24:26.910 | 99.00th=[ 125], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 144], 00:24:26.910 | 99.99th=[ 144] 00:24:26.910 bw ( KiB/s): min= 640, max= 1080, per=4.28%, avg=788.80, stdev=146.88, samples=20 00:24:26.910 iops : min= 160, max= 270, avg=197.20, stdev=36.72, samples=20 00:24:26.910 lat (msec) : 50=13.13%, 100=59.56%, 250=27.31% 00:24:26.910 cpu : usr=34.98%, sys=2.03%, ctx=1102, majf=0, minf=9 00:24:26.910 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:26.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 issued rwts: total=1988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.910 filename2: (groupid=0, jobs=1): err= 0: pid=99951: Fri Nov 29 16:59:49 2024 00:24:26.910 read: IOPS=183, BW=735KiB/s (753kB/s)(7376KiB/10034msec) 00:24:26.910 slat (usec): min=4, max=8024, avg=21.15, stdev=208.58 00:24:26.910 clat (msec): min=35, max=154, avg=86.92, stdev=24.20 00:24:26.910 lat (msec): min=35, max=154, avg=86.94, stdev=24.20 00:24:26.910 clat percentiles (msec): 00:24:26.910 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 64], 00:24:26.910 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 96], 00:24:26.910 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 121], 00:24:26.910 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 155], 99.95th=[ 155], 00:24:26.910 | 99.99th=[ 155] 00:24:26.910 bw ( KiB/s): min= 400, max= 1024, per=3.97%, avg=731.20, stdev=172.53, samples=20 00:24:26.910 iops : min= 100, max= 256, avg=182.80, stdev=43.13, samples=20 00:24:26.910 lat (msec) : 50=10.41%, 100=54.01%, 250=35.57% 00:24:26.910 cpu : usr=31.14%, sys=2.04%, ctx=880, majf=0, minf=9 00:24:26.910 IO depths : 1=0.1%, 2=2.5%, 4=9.9%, 8=73.0%, 16=14.5%, 32=0.0%, >=64=0.0% 00:24:26.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 complete : 0=0.0%, 4=89.7%, 8=8.1%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.910 issued rwts: total=1844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.911 filename2: (groupid=0, jobs=1): err= 0: pid=99952: Fri Nov 29 16:59:49 2024 00:24:26.911 read: IOPS=193, BW=774KiB/s (792kB/s)(7772KiB/10047msec) 00:24:26.911 slat (usec): min=3, max=8027, avg=20.98, stdev=200.07 00:24:26.911 clat (msec): min=24, max=168, avg=82.60, stdev=23.66 00:24:26.911 lat (msec): min=24, max=168, avg=82.62, stdev=23.66 00:24:26.911 clat percentiles (msec): 00:24:26.911 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 63], 00:24:26.911 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 85], 00:24:26.911 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 120], 00:24:26.911 | 99.00th=[ 128], 99.50th=[ 144], 99.90th=[ 169], 99.95th=[ 169], 00:24:26.911 | 99.99th=[ 169] 00:24:26.911 bw ( KiB/s): min= 528, max= 1008, per=4.18%, avg=770.80, stdev=150.84, samples=20 00:24:26.911 iops : min= 132, max= 252, avg=192.70, stdev=37.71, samples=20 
00:24:26.911 lat (msec) : 50=10.29%, 100=58.57%, 250=31.14% 00:24:26.911 cpu : usr=34.69%, sys=2.17%, ctx=1084, majf=0, minf=9 00:24:26.911 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=78.2%, 16=15.3%, 32=0.0%, >=64=0.0% 00:24:26.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.911 complete : 0=0.0%, 4=88.4%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.911 issued rwts: total=1943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.911 filename2: (groupid=0, jobs=1): err= 0: pid=99953: Fri Nov 29 16:59:49 2024 00:24:26.911 read: IOPS=196, BW=787KiB/s (806kB/s)(7904KiB/10045msec) 00:24:26.911 slat (usec): min=4, max=8025, avg=26.04, stdev=312.00 00:24:26.911 clat (msec): min=25, max=146, avg=81.19, stdev=23.60 00:24:26.911 lat (msec): min=25, max=146, avg=81.21, stdev=23.60 00:24:26.911 clat percentiles (msec): 00:24:26.911 | 1.00th=[ 32], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:24:26.911 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 85], 00:24:26.911 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 121], 00:24:26.911 | 99.00th=[ 125], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 146], 00:24:26.911 | 99.99th=[ 146] 00:24:26.911 bw ( KiB/s): min= 616, max= 1168, per=4.26%, avg=784.00, stdev=140.76, samples=20 00:24:26.911 iops : min= 154, max= 292, avg=196.00, stdev=35.19, samples=20 00:24:26.911 lat (msec) : 50=13.01%, 100=61.54%, 250=25.46% 00:24:26.911 cpu : usr=31.16%, sys=1.93%, ctx=884, majf=0, minf=9 00:24:26.911 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.4%, 16=16.3%, 32=0.0%, >=64=0.0% 00:24:26.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.911 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.911 issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.911 filename2: (groupid=0, jobs=1): err= 0: pid=99954: Fri Nov 29 16:59:49 2024 00:24:26.911 read: IOPS=197, BW=789KiB/s (808kB/s)(7932KiB/10055msec) 00:24:26.911 slat (usec): min=3, max=12023, avg=19.78, stdev=269.73 00:24:26.911 clat (msec): min=7, max=160, avg=80.89, stdev=25.77 00:24:26.911 lat (msec): min=7, max=160, avg=80.91, stdev=25.76 00:24:26.911 clat percentiles (msec): 00:24:26.911 | 1.00th=[ 13], 5.00th=[ 42], 10.00th=[ 50], 20.00th=[ 60], 00:24:26.911 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 84], 00:24:26.911 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 115], 95.00th=[ 121], 00:24:26.911 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 161], 00:24:26.911 | 99.99th=[ 161] 00:24:26.911 bw ( KiB/s): min= 563, max= 1264, per=4.28%, avg=788.60, stdev=176.63, samples=20 00:24:26.911 iops : min= 140, max= 316, avg=197.10, stdev=44.20, samples=20 00:24:26.911 lat (msec) : 10=0.81%, 20=1.61%, 50=8.22%, 100=60.82%, 250=28.54% 00:24:26.911 cpu : usr=42.94%, sys=2.51%, ctx=1376, majf=0, minf=9 00:24:26.911 IO depths : 1=0.1%, 2=0.7%, 4=2.4%, 8=80.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:26.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.911 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.911 issued rwts: total=1983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:26.911 00:24:26.911 Run status group 0 (all jobs): 00:24:26.911 READ: bw=18.0MiB/s (18.8MB/s), 694KiB/s-818KiB/s (711kB/s-838kB/s), 
io=181MiB (190MB), run=10003-10080msec 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.911 bdev_null0 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.911 [2024-11-29 16:59:49.243934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.911 bdev_null1 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.911 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:26.912 { 00:24:26.912 "params": { 00:24:26.912 "name": "Nvme$subsystem", 00:24:26.912 "trtype": "$TEST_TRANSPORT", 00:24:26.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.912 "adrfam": "ipv4", 00:24:26.912 "trsvcid": "$NVMF_PORT", 00:24:26.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.912 "hdgst": ${hdgst:-false}, 00:24:26.912 "ddgst": ${ddgst:-false} 00:24:26.912 }, 00:24:26.912 "method": "bdev_nvme_attach_controller" 00:24:26.912 } 00:24:26.912 EOF 00:24:26.912 )") 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:26.912 { 00:24:26.912 "params": { 00:24:26.912 "name": "Nvme$subsystem", 00:24:26.912 "trtype": "$TEST_TRANSPORT", 00:24:26.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.912 "adrfam": "ipv4", 00:24:26.912 "trsvcid": "$NVMF_PORT", 00:24:26.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.912 "hdgst": ${hdgst:-false}, 00:24:26.912 "ddgst": ${ddgst:-false} 00:24:26.912 }, 00:24:26.912 "method": "bdev_nvme_attach_controller" 00:24:26.912 } 00:24:26.912 EOF 00:24:26.912 )") 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
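[editor's note] For reference, the per-subsystem setup traced above reduces to the following RPC sequence; this is a minimal sketch that assumes rpc_cmd wraps scripts/rpc.py against the default application socket (only the commands and arguments shown in the trace are used):

    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # expose it over NVMe/TCP on 10.0.0.3:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # the second subsystem (bdev_null1 / cnode1) is created the same way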
00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:26.912 "params": { 00:24:26.912 "name": "Nvme0", 00:24:26.912 "trtype": "tcp", 00:24:26.912 "traddr": "10.0.0.3", 00:24:26.912 "adrfam": "ipv4", 00:24:26.912 "trsvcid": "4420", 00:24:26.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:26.912 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:26.912 "hdgst": false, 00:24:26.912 "ddgst": false 00:24:26.912 }, 00:24:26.912 "method": "bdev_nvme_attach_controller" 00:24:26.912 },{ 00:24:26.912 "params": { 00:24:26.912 "name": "Nvme1", 00:24:26.912 "trtype": "tcp", 00:24:26.912 "traddr": "10.0.0.3", 00:24:26.912 "adrfam": "ipv4", 00:24:26.912 "trsvcid": "4420", 00:24:26.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:26.912 "hdgst": false, 00:24:26.912 "ddgst": false 00:24:26.912 }, 00:24:26.912 "method": "bdev_nvme_attach_controller" 00:24:26.912 }' 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:26.912 16:59:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:26.912 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:26.912 ... 00:24:26.912 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:26.912 ... 
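[editor's note] The fio_bdev call above is stock fio with the SPDK bdev engine preloaded: /dev/fd/62 carries the generated JSON printed just before (the bdev_nvme_attach_controller parameters) and /dev/fd/61 carries the fio job file. A file-based equivalent, with bdev.json and job.fio standing in for the two process-substitution descriptors, would look roughly like:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio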
00:24:26.912 fio-3.35 00:24:26.912 Starting 4 threads 00:24:32.185 00:24:32.185 filename0: (groupid=0, jobs=1): err= 0: pid=100095: Fri Nov 29 16:59:55 2024 00:24:32.185 read: IOPS=1987, BW=15.5MiB/s (16.3MB/s)(77.6MiB/5001msec) 00:24:32.185 slat (usec): min=7, max=313, avg=15.43, stdev= 6.15 00:24:32.185 clat (usec): min=1432, max=9547, avg=3968.80, stdev=451.20 00:24:32.185 lat (usec): min=1445, max=9561, avg=3984.23, stdev=451.01 00:24:32.185 clat percentiles (usec): 00:24:32.185 | 1.00th=[ 2933], 5.00th=[ 3163], 10.00th=[ 3458], 20.00th=[ 3884], 00:24:32.185 | 30.00th=[ 3916], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 4015], 00:24:32.185 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4555], 00:24:32.185 | 99.00th=[ 5080], 99.50th=[ 5866], 99.90th=[ 7242], 99.95th=[ 7242], 00:24:32.185 | 99.99th=[ 9503] 00:24:32.185 bw ( KiB/s): min=15488, max=17056, per=23.18%, avg=15902.22, stdev=521.56, samples=9 00:24:32.185 iops : min= 1936, max= 2132, avg=1987.78, stdev=65.20, samples=9 00:24:32.185 lat (msec) : 2=0.66%, 4=55.73%, 10=43.60% 00:24:32.185 cpu : usr=92.14%, sys=6.70%, ctx=138, majf=0, minf=9 00:24:32.185 IO depths : 1=0.1%, 2=20.8%, 4=53.5%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:32.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.185 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.185 issued rwts: total=9937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:32.185 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:32.185 filename0: (groupid=0, jobs=1): err= 0: pid=100096: Fri Nov 29 16:59:55 2024 00:24:32.185 read: IOPS=1987, BW=15.5MiB/s (16.3MB/s)(77.6MiB/5001msec) 00:24:32.185 slat (nsec): min=3292, max=62340, avg=14716.09, stdev=5286.28 00:24:32.185 clat (usec): min=1448, max=9498, avg=3972.52, stdev=449.61 00:24:32.185 lat (usec): min=1461, max=9511, avg=3987.24, stdev=449.83 00:24:32.185 clat percentiles (usec): 00:24:32.185 | 1.00th=[ 2933], 5.00th=[ 3163], 10.00th=[ 3458], 20.00th=[ 3884], 00:24:32.185 | 30.00th=[ 3916], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 4015], 00:24:32.185 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4555], 00:24:32.185 | 99.00th=[ 5145], 99.50th=[ 5473], 99.90th=[ 7242], 99.95th=[ 7308], 00:24:32.185 | 99.99th=[ 9503] 00:24:32.185 bw ( KiB/s): min=15488, max=17056, per=23.19%, avg=15905.89, stdev=526.35, samples=9 00:24:32.185 iops : min= 1936, max= 2132, avg=1988.22, stdev=65.78, samples=9 00:24:32.185 lat (msec) : 2=0.66%, 4=53.98%, 10=45.36% 00:24:32.185 cpu : usr=91.86%, sys=7.28%, ctx=5, majf=0, minf=9 00:24:32.185 IO depths : 1=0.1%, 2=20.8%, 4=53.5%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:32.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.185 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.185 issued rwts: total=9937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:32.185 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:32.185 filename1: (groupid=0, jobs=1): err= 0: pid=100097: Fri Nov 29 16:59:55 2024 00:24:32.185 read: IOPS=1995, BW=15.6MiB/s (16.3MB/s)(78.0MiB/5002msec) 00:24:32.185 slat (nsec): min=7124, max=63518, avg=15358.10, stdev=5228.73 00:24:32.185 clat (usec): min=1421, max=6355, avg=3953.34, stdev=413.98 00:24:32.185 lat (usec): min=1434, max=6362, avg=3968.69, stdev=414.01 00:24:32.185 clat percentiles (usec): 00:24:32.185 | 1.00th=[ 2737], 5.00th=[ 3163], 10.00th=[ 3425], 20.00th=[ 3884], 00:24:32.185 | 30.00th=[ 3916], 40.00th=[ 
3949], 50.00th=[ 3982], 60.00th=[ 4015], 00:24:32.185 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4555], 00:24:32.185 | 99.00th=[ 5014], 99.50th=[ 5211], 99.90th=[ 5866], 99.95th=[ 5932], 00:24:32.185 | 99.99th=[ 6325] 00:24:32.185 bw ( KiB/s): min=15488, max=17168, per=23.29%, avg=15976.89, stdev=649.58, samples=9 00:24:32.185 iops : min= 1936, max= 2146, avg=1997.11, stdev=81.20, samples=9 00:24:32.185 lat (msec) : 2=0.68%, 4=55.43%, 10=43.89% 00:24:32.185 cpu : usr=91.42%, sys=7.76%, ctx=6, majf=0, minf=0 00:24:32.185 IO depths : 1=0.1%, 2=20.5%, 4=53.7%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:32.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.185 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.185 issued rwts: total=9979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:32.185 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:32.185 filename1: (groupid=0, jobs=1): err= 0: pid=100098: Fri Nov 29 16:59:55 2024 00:24:32.185 read: IOPS=2607, BW=20.4MiB/s (21.4MB/s)(102MiB/5003msec) 00:24:32.185 slat (nsec): min=6697, max=54169, avg=9469.84, stdev=3809.64 00:24:32.185 clat (usec): min=578, max=6562, avg=3044.47, stdev=1070.37 00:24:32.185 lat (usec): min=585, max=6575, avg=3053.94, stdev=1070.16 00:24:32.185 clat percentiles (usec): 00:24:32.185 | 1.00th=[ 1254], 5.00th=[ 1287], 10.00th=[ 1319], 20.00th=[ 1418], 00:24:32.185 | 30.00th=[ 2802], 40.00th=[ 3032], 50.00th=[ 3228], 60.00th=[ 3687], 00:24:32.185 | 70.00th=[ 3851], 80.00th=[ 3916], 90.00th=[ 4047], 95.00th=[ 4293], 00:24:32.185 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 5669], 99.95th=[ 5932], 00:24:32.185 | 99.99th=[ 6325] 00:24:32.185 bw ( KiB/s): min=17408, max=21840, per=30.29%, avg=20776.89, stdev=1618.28, samples=9 00:24:32.185 iops : min= 2176, max= 2730, avg=2597.11, stdev=202.28, samples=9 00:24:32.185 lat (usec) : 750=0.06%, 1000=0.19% 00:24:32.185 lat (msec) : 2=23.81%, 4=63.57%, 10=12.36% 00:24:32.185 cpu : usr=90.98%, sys=8.00%, ctx=7, majf=0, minf=0 00:24:32.185 IO depths : 1=0.1%, 2=0.3%, 4=64.5%, 8=35.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:32.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.185 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.185 issued rwts: total=13043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:32.185 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:32.185 00:24:32.185 Run status group 0 (all jobs): 00:24:32.185 READ: bw=67.0MiB/s (70.2MB/s), 15.5MiB/s-20.4MiB/s (16.3MB/s-21.4MB/s), io=335MiB (351MB), run=5001-5003msec 00:24:32.185 16:59:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:32.185 16:59:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:32.185 16:59:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:32.185 16:59:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:32.185 16:59:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:32.185 16:59:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:32.185 16:59:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.185 16:59:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.186 
16:59:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:32.186 ************************************ 00:24:32.186 END TEST fio_dif_rand_params 00:24:32.186 ************************************ 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.186 00:24:32.186 real 0m23.108s 00:24:32.186 user 2m3.187s 00:24:32.186 sys 0m8.728s 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.186 16:59:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:32.186 16:59:55 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:32.186 16:59:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:32.186 16:59:55 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.186 16:59:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:32.186 ************************************ 00:24:32.186 START TEST fio_dif_digest 00:24:32.186 ************************************ 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest 
-- target/dif.sh@28 -- # local sub 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:32.186 bdev_null0 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:32.186 [2024-11-29 16:59:55.288388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:32.186 { 00:24:32.186 "params": { 00:24:32.186 "name": "Nvme$subsystem", 00:24:32.186 "trtype": "$TEST_TRANSPORT", 00:24:32.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.186 "adrfam": "ipv4", 00:24:32.186 "trsvcid": "$NVMF_PORT", 00:24:32.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.186 "hdgst": ${hdgst:-false}, 00:24:32.186 "ddgst": ${ddgst:-false} 00:24:32.186 }, 00:24:32.186 "method": "bdev_nvme_attach_controller" 00:24:32.186 } 00:24:32.186 EOF 00:24:32.186 )") 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:32.186 "params": { 00:24:32.186 "name": "Nvme0", 00:24:32.186 "trtype": "tcp", 00:24:32.186 "traddr": "10.0.0.3", 00:24:32.186 "adrfam": "ipv4", 00:24:32.186 "trsvcid": "4420", 00:24:32.186 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:32.186 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:32.186 "hdgst": true, 00:24:32.186 "ddgst": true 00:24:32.186 }, 00:24:32.186 "method": "bdev_nvme_attach_controller" 00:24:32.186 }' 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:32.186 16:59:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:32.186 16:59:55 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:32.186 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:32.186 ... 00:24:32.186 fio-3.35 00:24:32.186 Starting 3 threads 00:24:44.395 00:24:44.395 filename0: (groupid=0, jobs=1): err= 0: pid=100204: Fri Nov 29 17:00:05 2024 00:24:44.395 read: IOPS=235, BW=29.4MiB/s (30.8MB/s)(294MiB/10006msec) 00:24:44.395 slat (nsec): min=7019, max=43187, avg=10102.46, stdev=4096.18 00:24:44.395 clat (usec): min=11735, max=14903, avg=12721.28, stdev=564.11 00:24:44.395 lat (usec): min=11744, max=14915, avg=12731.38, stdev=564.45 00:24:44.395 clat percentiles (usec): 00:24:44.395 | 1.00th=[11994], 5.00th=[12125], 10.00th=[12125], 20.00th=[12256], 00:24:44.395 | 30.00th=[12387], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:24:44.395 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13960], 00:24:44.395 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14877], 99.95th=[14877], 00:24:44.395 | 99.99th=[14877] 00:24:44.395 bw ( KiB/s): min=28416, max=31488, per=33.32%, avg=30113.68, stdev=792.32, samples=19 00:24:44.395 iops : min= 222, max= 246, avg=235.26, stdev= 6.19, samples=19 00:24:44.395 lat (msec) : 20=100.00% 00:24:44.395 cpu : usr=91.88%, sys=7.57%, ctx=32, majf=0, minf=0 00:24:44.395 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:44.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.395 issued rwts: total=2355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.395 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:44.395 filename0: (groupid=0, jobs=1): err= 0: pid=100205: Fri Nov 29 17:00:05 2024 00:24:44.395 read: IOPS=235, BW=29.4MiB/s (30.9MB/s)(294MiB/10004msec) 00:24:44.395 slat (nsec): min=6994, max=42379, avg=9867.97, stdev=3839.62 00:24:44.395 clat (usec): min=9510, max=14718, avg=12718.74, stdev=569.96 00:24:44.395 lat (usec): min=9517, max=14730, avg=12728.61, stdev=570.15 00:24:44.395 clat percentiles (usec): 00:24:44.395 | 1.00th=[11994], 5.00th=[12125], 10.00th=[12125], 20.00th=[12256], 00:24:44.395 | 30.00th=[12387], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:24:44.395 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13960], 00:24:44.396 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14746], 99.95th=[14746], 00:24:44.396 | 99.99th=[14746] 00:24:44.396 bw ( KiB/s): min=28416, max=31488, per=33.32%, avg=30113.68, stdev=792.32, samples=19 00:24:44.396 iops : min= 222, max= 246, avg=235.26, stdev= 6.19, samples=19 00:24:44.396 lat (msec) : 10=0.13%, 20=99.87% 00:24:44.396 cpu : usr=91.70%, sys=7.76%, ctx=16, majf=0, minf=0 00:24:44.396 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:44.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.396 issued rwts: total=2355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.396 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:44.396 filename0: (groupid=0, jobs=1): err= 0: pid=100206: Fri Nov 29 17:00:05 2024 00:24:44.396 read: IOPS=235, BW=29.4MiB/s (30.9MB/s)(294MiB/10004msec) 00:24:44.396 slat (usec): min=6, max=158, avg=10.10, stdev= 5.38 00:24:44.396 clat (usec): min=7950, 
max=14785, avg=12717.90, stdev=585.95 00:24:44.396 lat (usec): min=7957, max=14797, avg=12728.00, stdev=586.74 00:24:44.396 clat percentiles (usec): 00:24:44.396 | 1.00th=[11994], 5.00th=[12125], 10.00th=[12125], 20.00th=[12256], 00:24:44.396 | 30.00th=[12387], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:24:44.396 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13960], 00:24:44.396 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14746], 99.95th=[14746], 00:24:44.396 | 99.99th=[14746] 00:24:44.396 bw ( KiB/s): min=28416, max=31488, per=33.32%, avg=30113.68, stdev=871.11, samples=19 00:24:44.396 iops : min= 222, max= 246, avg=235.26, stdev= 6.81, samples=19 00:24:44.396 lat (msec) : 10=0.13%, 20=99.87% 00:24:44.396 cpu : usr=90.37%, sys=8.79%, ctx=231, majf=0, minf=0 00:24:44.396 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:44.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.396 issued rwts: total=2355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.396 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:44.396 00:24:44.396 Run status group 0 (all jobs): 00:24:44.396 READ: bw=88.3MiB/s (92.5MB/s), 29.4MiB/s-29.4MiB/s (30.8MB/s-30.9MB/s), io=883MiB (926MB), run=10004-10006msec 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.396 00:24:44.396 real 0m10.877s 00:24:44.396 user 0m27.967s 00:24:44.396 sys 0m2.646s 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.396 17:00:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:44.396 ************************************ 00:24:44.396 END TEST fio_dif_digest 00:24:44.396 ************************************ 00:24:44.396 17:00:06 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:44.396 17:00:06 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 
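The fio_dif_digest run above drives fio through SPDK's bdev fio plugin: the JSON emitted by gen_nvmf_target_json attaches controller Nvme0 over NVMe/TCP at 10.0.0.3:4420 with header and data digests enabled, and the generated job asks for rw=randread, bs=128k, numjobs=3, iodepth=3 over a 10 second run. The following is a minimal standalone sketch of that invocation; the "subsystems"/"config" wrapper layout, the bdev name Nvme0n1, and the temporary file paths are assumptions rather than a copy of the /dev/fd/6x contents the test actually generates.

# Illustrative re-creation of the fio-over-spdk_bdev invocation shown above.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF

cat > /tmp/digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
EOF

# The test preloads SPDK's fio plugin, exactly as the LD_PRELOAD line above shows.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0.json /tmp/digest.fio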
00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:44.396 rmmod nvme_tcp 00:24:44.396 rmmod nvme_fabrics 00:24:44.396 rmmod nvme_keyring 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 99461 ']' 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 99461 00:24:44.396 17:00:06 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 99461 ']' 00:24:44.396 17:00:06 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 99461 00:24:44.396 17:00:06 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:24:44.396 17:00:06 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:44.396 17:00:06 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99461 00:24:44.396 17:00:06 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:44.396 17:00:06 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:44.396 killing process with pid 99461 00:24:44.396 17:00:06 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99461' 00:24:44.396 17:00:06 nvmf_dif -- common/autotest_common.sh@973 -- # kill 99461 00:24:44.396 17:00:06 nvmf_dif -- common/autotest_common.sh@978 -- # wait 99461 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:44.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:44.396 Waiting for block devices as requested 00:24:44.396 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:44.396 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:44.396 17:00:06 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:44.396 17:00:07 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:44.396 17:00:07 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:44.396 17:00:07 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:44.396 17:00:07 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:44.396 17:00:07 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:44.396 17:00:07 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:44.396 17:00:07 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:44.396 17:00:07 nvmf_dif -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:24:44.396 17:00:07 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:44.396 17:00:07 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:44.396 17:00:07 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:44.396 17:00:07 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.396 17:00:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:44.396 17:00:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.396 17:00:07 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:24:44.396 00:24:44.396 real 0m58.573s 00:24:44.396 user 3m45.471s 00:24:44.396 sys 0m19.751s 00:24:44.396 17:00:07 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.396 17:00:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:44.396 ************************************ 00:24:44.396 END TEST nvmf_dif 00:24:44.396 ************************************ 00:24:44.396 17:00:07 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:44.396 17:00:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:44.396 17:00:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:44.396 17:00:07 -- common/autotest_common.sh@10 -- # set +x 00:24:44.396 ************************************ 00:24:44.396 START TEST nvmf_abort_qd_sizes 00:24:44.396 ************************************ 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:44.396 * Looking for test storage... 00:24:44.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:24:44.396 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:44.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.397 --rc genhtml_branch_coverage=1 00:24:44.397 --rc genhtml_function_coverage=1 00:24:44.397 --rc genhtml_legend=1 00:24:44.397 --rc geninfo_all_blocks=1 00:24:44.397 --rc geninfo_unexecuted_blocks=1 00:24:44.397 00:24:44.397 ' 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:44.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.397 --rc genhtml_branch_coverage=1 00:24:44.397 --rc genhtml_function_coverage=1 00:24:44.397 --rc genhtml_legend=1 00:24:44.397 --rc geninfo_all_blocks=1 00:24:44.397 --rc geninfo_unexecuted_blocks=1 00:24:44.397 00:24:44.397 ' 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:44.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.397 --rc genhtml_branch_coverage=1 00:24:44.397 --rc genhtml_function_coverage=1 00:24:44.397 --rc genhtml_legend=1 00:24:44.397 --rc geninfo_all_blocks=1 00:24:44.397 --rc geninfo_unexecuted_blocks=1 00:24:44.397 00:24:44.397 ' 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:44.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.397 --rc genhtml_branch_coverage=1 00:24:44.397 --rc genhtml_function_coverage=1 00:24:44.397 --rc genhtml_legend=1 00:24:44.397 --rc geninfo_all_blocks=1 00:24:44.397 --rc geninfo_unexecuted_blocks=1 00:24:44.397 00:24:44.397 ' 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:44.397 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:44.397 Cannot find device "nvmf_init_br" 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:44.397 Cannot find device "nvmf_init_br2" 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:44.397 Cannot find device "nvmf_tgt_br" 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:44.397 Cannot find device "nvmf_tgt_br2" 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:24:44.397 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:44.398 Cannot find device "nvmf_init_br" 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:44.398 Cannot find device "nvmf_init_br2" 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:44.398 Cannot find device "nvmf_tgt_br" 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:44.398 Cannot find device "nvmf_tgt_br2" 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:44.398 Cannot find device "nvmf_br" 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:44.398 Cannot find device "nvmf_init_if" 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:44.398 Cannot find device "nvmf_init_if2" 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:44.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
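The nvmf_veth_init sequence that follows first tears down any stale interfaces (hence the "Cannot find device" / "Cannot open network namespace" messages above) and then rebuilds the test topology: a namespace nvmf_tgt_ns_spdk holding the target-side veth ends at 10.0.0.3 and 10.0.0.4, two initiator-side interfaces at 10.0.0.1 and 10.0.0.2, and a bridge nvmf_br joining the host-side peers. A condensed sketch of that setup, assembled from the commands logged below, is shown here for orientation; it is illustrative and omits the iptables ACCEPT rules and the ping checks the script performs afterwards.

# Condensed from the nvmf_veth_init commands in this log; illustrative, not a drop-in script.
ip netns add nvmf_tgt_ns_spdk

# veth pairs: initiator ends stay in the default namespace, target ends move into the netns
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: initiator 10.0.0.1/.2, target 10.0.0.3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peer interfaces together
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" master nvmf_br
done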
00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:44.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:44.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:44.398 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:24:44.398 00:24:44.398 --- 10.0.0.3 ping statistics --- 00:24:44.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.398 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:44.398 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:44.398 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:24:44.398 00:24:44.398 --- 10.0.0.4 ping statistics --- 00:24:44.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.398 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:44.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:44.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:24:44.398 00:24:44.398 --- 10.0.0.1 ping statistics --- 00:24:44.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.398 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:44.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:44.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:24:44.398 00:24:44.398 --- 10.0.0.2 ping statistics --- 00:24:44.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.398 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:24:44.398 17:00:07 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:44.964 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:44.964 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:44.964 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:44.964 17:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.964 17:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:44.964 17:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:44.964 17:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.964 17:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:44.964 17:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:45.222 17:00:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:45.222 17:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:45.222 17:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:45.222 17:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:45.222 17:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=100849 00:24:45.222 17:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 100849 00:24:45.222 17:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 100849 ']' 00:24:45.222 17:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.222 17:00:08 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:45.222 17:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.222 17:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.222 17:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.222 17:00:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:45.222 [2024-11-29 17:00:08.848763] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:24:45.222 [2024-11-29 17:00:08.848853] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.222 [2024-11-29 17:00:08.976889] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:24:45.222 [2024-11-29 17:00:09.008468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:45.480 [2024-11-29 17:00:09.036358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.480 [2024-11-29 17:00:09.036425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.480 [2024-11-29 17:00:09.036440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.480 [2024-11-29 17:00:09.036450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.480 [2024-11-29 17:00:09.036459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.480 [2024-11-29 17:00:09.037434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.480 [2024-11-29 17:00:09.037592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.480 [2024-11-29 17:00:09.038219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.481 [2024-11-29 17:00:09.038256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.481 [2024-11-29 17:00:09.077086] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- 
scripts/common.sh@237 -- # subclass=08 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.481 17:00:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:45.481 ************************************ 00:24:45.481 START TEST spdk_target_abort 00:24:45.481 ************************************ 00:24:45.481 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:24:45.481 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:45.481 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:45.481 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.481 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:45.739 spdk_targetn1 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:45.739 [2024-11-29 17:00:09.283631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:45.739 [2024-11-29 17:00:09.322865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:24:45.739 17:00:09 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:45.739 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:45.740 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:45.740 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:45.740 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:45.740 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:45.740 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:45.740 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:45.740 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:45.740 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:24:45.740 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:45.740 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:45.740 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:45.740 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:45.740 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:45.740 17:00:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:49.026 Initializing NVMe Controllers 00:24:49.026 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:49.026 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:49.026 Initialization complete. Launching workers. 
00:24:49.026 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9683, failed: 0 00:24:49.026 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1074, failed to submit 8609 00:24:49.026 success 892, unsuccessful 182, failed 0 00:24:49.026 17:00:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:49.026 17:00:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:52.442 Initializing NVMe Controllers 00:24:52.442 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:52.442 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:52.442 Initialization complete. Launching workers. 00:24:52.442 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8920, failed: 0 00:24:52.442 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1157, failed to submit 7763 00:24:52.442 success 408, unsuccessful 749, failed 0 00:24:52.442 17:00:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:52.442 17:00:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:55.729 Initializing NVMe Controllers 00:24:55.729 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:55.729 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:55.729 Initialization complete. Launching workers. 
00:24:55.729 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31667, failed: 0 00:24:55.729 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2375, failed to submit 29292 00:24:55.729 success 403, unsuccessful 1972, failed 0 00:24:55.729 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:55.729 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.729 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:55.729 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.729 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:55.729 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.729 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:55.729 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.729 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 100849 00:24:55.729 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 100849 ']' 00:24:55.729 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 100849 00:24:55.729 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:24:55.729 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.729 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100849 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:55.989 killing process with pid 100849 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100849' 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 100849 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 100849 00:24:55.989 00:24:55.989 real 0m10.442s 00:24:55.989 user 0m40.038s 00:24:55.989 sys 0m2.098s 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:55.989 ************************************ 00:24:55.989 END TEST spdk_target_abort 00:24:55.989 ************************************ 00:24:55.989 17:00:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:55.989 17:00:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:55.989 17:00:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:55.989 17:00:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:55.989 ************************************ 00:24:55.989 START TEST kernel_target_abort 00:24:55.989 
************************************ 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:55.989 17:00:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:56.557 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:56.557 Waiting for block devices as requested 00:24:56.557 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:56.557 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:56.557 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:56.557 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:56.557 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:56.557 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:56.557 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:56.557 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:56.557 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:56.557 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:56.557 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:56.817 No valid GPT data, bailing 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:56.817 No valid GPT data, bailing 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:56.817 No valid GPT data, bailing 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:56.817 No valid GPT data, bailing 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:24:56.817 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b --hostid=ecede086-b106-482f-ba49-ce4e74dc3f2b -a 10.0.0.1 -t tcp -s 4420 00:24:57.076 00:24:57.076 Discovery Log Number of Records 2, Generation counter 2 00:24:57.076 =====Discovery Log Entry 0====== 00:24:57.076 trtype: tcp 00:24:57.076 adrfam: ipv4 00:24:57.076 subtype: current discovery subsystem 00:24:57.076 treq: not specified, sq flow control disable supported 00:24:57.076 portid: 1 00:24:57.076 trsvcid: 4420 00:24:57.076 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:57.076 traddr: 10.0.0.1 00:24:57.076 eflags: none 00:24:57.076 sectype: none 00:24:57.076 =====Discovery Log Entry 1====== 00:24:57.076 trtype: tcp 00:24:57.076 adrfam: ipv4 00:24:57.076 subtype: nvme subsystem 00:24:57.076 treq: not specified, sq flow control disable supported 00:24:57.076 portid: 1 00:24:57.076 trsvcid: 4420 00:24:57.076 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:57.076 traddr: 10.0.0.1 00:24:57.076 eflags: none 00:24:57.076 sectype: none 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:57.076 17:00:20 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:57.076 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:57.077 17:00:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:00.361 Initializing NVMe Controllers 00:25:00.361 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:00.361 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:00.361 Initialization complete. Launching workers. 00:25:00.361 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32901, failed: 0 00:25:00.361 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32901, failed to submit 0 00:25:00.361 success 0, unsuccessful 32901, failed 0 00:25:00.361 17:00:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:00.361 17:00:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:03.650 Initializing NVMe Controllers 00:25:03.650 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:03.650 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:03.650 Initialization complete. Launching workers. 
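The kernel-mode target exercised by this half of the test is plain nvmet configfs plumbing, set up by configure_kernel_target a few lines up. A condensed, standalone sketch of the same steps (device path, address and NQN copied from the log; attribute names follow the standard nvmet configfs layout, and the nvmet/nvmet-tcp modules are assumed to be loaded already):

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo 1 > "$subsys/attr_allow_any_host"              # accept any host NQN
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"

  echo tcp      > "$port/addr_trtype"                 # NVMe/TCP listener on 10.0.0.1:4420
  echo ipv4     > "$port/addr_adrfam"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo 4420     > "$port/addr_trsvcid"
  ln -s "$subsys" "$port/subsystems/"                 # expose the subsystem on the port

  nvme discover -t tcp -a 10.0.0.1 -s 4420            # should list discovery + testnqn, as in the log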
00:25:03.650 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61909, failed: 0 00:25:03.650 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25022, failed to submit 36887 00:25:03.650 success 0, unsuccessful 25022, failed 0 00:25:03.650 17:00:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:03.650 17:00:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:06.941 Initializing NVMe Controllers 00:25:06.941 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:06.941 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:06.941 Initialization complete. Launching workers. 00:25:06.941 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67421, failed: 0 00:25:06.941 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16834, failed to submit 50587 00:25:06.941 success 0, unsuccessful 16834, failed 0 00:25:06.941 17:00:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:25:06.941 17:00:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:06.941 17:00:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:25:06.941 17:00:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:06.941 17:00:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:06.941 17:00:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:06.941 17:00:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:06.941 17:00:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:06.941 17:00:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:06.941 17:00:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:07.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:07.768 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:08.028 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:08.028 00:25:08.028 real 0m11.952s 00:25:08.028 user 0m5.727s 00:25:08.028 sys 0m3.538s 00:25:08.028 17:00:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.028 17:00:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:08.028 ************************************ 00:25:08.028 END TEST kernel_target_abort 00:25:08.028 ************************************ 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:25:08.028 
17:00:31 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:08.028 rmmod nvme_tcp 00:25:08.028 rmmod nvme_fabrics 00:25:08.028 rmmod nvme_keyring 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 100849 ']' 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 100849 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 100849 ']' 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 100849 00:25:08.028 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (100849) - No such process 00:25:08.028 Process with pid 100849 is not found 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 100849 is not found' 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:25:08.028 17:00:31 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:08.597 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:08.597 Waiting for block devices as requested 00:25:08.597 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:08.597 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:08.597 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:08.597 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:08.597 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:25:08.597 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:25:08.597 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:08.597 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:08.856 17:00:32 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.856 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:08.857 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.857 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:25:08.857 00:25:08.857 real 0m25.359s 00:25:08.857 user 0m46.914s 00:25:08.857 sys 0m7.072s 00:25:08.857 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.857 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:08.857 ************************************ 00:25:08.857 END TEST nvmf_abort_qd_sizes 00:25:08.857 ************************************ 00:25:09.117 17:00:32 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:09.117 17:00:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:09.117 17:00:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:09.117 17:00:32 -- common/autotest_common.sh@10 -- # set +x 00:25:09.117 ************************************ 00:25:09.117 START TEST keyring_file 00:25:09.117 ************************************ 00:25:09.117 17:00:32 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:09.117 * Looking for test storage... 
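The nvmftestfini cleanup traced just above (before keyring_file starts) is mostly network teardown. A rough sketch of the equivalent manual steps, using the interface and namespace names that nvmf/common.sh creates for its virtual-ethernet topology (remove_spdk_ns itself is not shown in the log, so its final step is an assumption):

  iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop only the test-added rules
  for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" nomaster
      ip link set "$ifc" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                          # presumably what remove_spdk_ns does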
00:25:09.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:09.117 17:00:32 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:09.117 17:00:32 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:25:09.117 17:00:32 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:09.117 17:00:32 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@345 -- # : 1 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@353 -- # local d=1 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@355 -- # echo 1 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@353 -- # local d=2 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@355 -- # echo 2 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:09.117 17:00:32 keyring_file -- scripts/common.sh@368 -- # return 0 00:25:09.118 17:00:32 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.118 17:00:32 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:09.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.118 --rc genhtml_branch_coverage=1 00:25:09.118 --rc genhtml_function_coverage=1 00:25:09.118 --rc genhtml_legend=1 00:25:09.118 --rc geninfo_all_blocks=1 00:25:09.118 --rc geninfo_unexecuted_blocks=1 00:25:09.118 00:25:09.118 ' 00:25:09.118 17:00:32 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:09.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.118 --rc genhtml_branch_coverage=1 00:25:09.118 --rc genhtml_function_coverage=1 00:25:09.118 --rc genhtml_legend=1 00:25:09.118 --rc geninfo_all_blocks=1 00:25:09.118 --rc 
geninfo_unexecuted_blocks=1 00:25:09.118 00:25:09.118 ' 00:25:09.118 17:00:32 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:09.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.118 --rc genhtml_branch_coverage=1 00:25:09.118 --rc genhtml_function_coverage=1 00:25:09.118 --rc genhtml_legend=1 00:25:09.118 --rc geninfo_all_blocks=1 00:25:09.118 --rc geninfo_unexecuted_blocks=1 00:25:09.118 00:25:09.118 ' 00:25:09.118 17:00:32 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:09.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.118 --rc genhtml_branch_coverage=1 00:25:09.118 --rc genhtml_function_coverage=1 00:25:09.118 --rc genhtml_legend=1 00:25:09.118 --rc geninfo_all_blocks=1 00:25:09.118 --rc geninfo_unexecuted_blocks=1 00:25:09.118 00:25:09.118 ' 00:25:09.118 17:00:32 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:09.118 17:00:32 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:09.118 17:00:32 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.118 17:00:32 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.118 17:00:32 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.118 17:00:32 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.118 17:00:32 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.118 17:00:32 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.118 17:00:32 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.118 17:00:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:25:09.118 17:00:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@51 -- # : 0 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:09.118 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.118 17:00:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:09.118 17:00:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:09.118 17:00:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:09.118 17:00:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:25:09.118 17:00:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:25:09.118 17:00:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:25:09.118 17:00:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:09.118 17:00:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:09.118 17:00:32 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:09.118 17:00:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:09.118 17:00:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:09.118 17:00:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:09.118 17:00:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xBkN76vSMt 00:25:09.118 17:00:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:25:09.118 17:00:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:25:09.378 17:00:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xBkN76vSMt 00:25:09.378 17:00:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xBkN76vSMt 00:25:09.378 17:00:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.xBkN76vSMt 00:25:09.378 17:00:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:25:09.378 17:00:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:09.378 17:00:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:25:09.378 17:00:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:09.378 17:00:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:09.378 17:00:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:09.378 17:00:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BuLnXbNtAr 00:25:09.378 17:00:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:25:09.378 17:00:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:09.378 17:00:32 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:25:09.378 17:00:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:09.378 17:00:32 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:25:09.378 17:00:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:25:09.378 17:00:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:25:09.378 17:00:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BuLnXbNtAr 00:25:09.378 17:00:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BuLnXbNtAr 00:25:09.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
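Both PSK files used by the keyring tests are prepared the same way. Boiled down, prep_key does roughly the following (a sketch that assumes test/nvmf/common.sh has been sourced so that format_interchange_psk is available; the key bytes are the test's fixed values, and the resulting file holds the NVMe TLS PSK interchange string, "NVMeTLSkey-1:..."):

  key_path=$(mktemp)                                          # e.g. /tmp/tmp.xBkN76vSMt in this run
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key_path"
  chmod 0600 "$key_path"                                      # keyring_file refuses more permissive modes
  # Later, once bperf is listening on /var/tmp/bperf.sock, the file is registered as "key0":
  #   scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_path"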
00:25:09.378 17:00:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BuLnXbNtAr 00:25:09.378 17:00:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=101746 00:25:09.378 17:00:32 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:09.378 17:00:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 101746 00:25:09.378 17:00:32 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 101746 ']' 00:25:09.378 17:00:32 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.378 17:00:32 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.378 17:00:32 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.378 17:00:32 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.378 17:00:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:09.378 [2024-11-29 17:00:33.073282] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:09.378 [2024-11-29 17:00:33.073631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101746 ] 00:25:09.637 [2024-11-29 17:00:33.200037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:09.637 [2024-11-29 17:00:33.225254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.637 [2024-11-29 17:00:33.244303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.637 [2024-11-29 17:00:33.277293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:09.637 17:00:33 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.637 17:00:33 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:25:09.637 17:00:33 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:25:09.637 17:00:33 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.637 17:00:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:09.637 [2024-11-29 17:00:33.397105] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.637 null0 00:25:09.897 [2024-11-29 17:00:33.429090] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:09.897 [2024-11-29 17:00:33.429458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.897 17:00:33 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:09.897 17:00:33 
keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:09.897 [2024-11-29 17:00:33.461084] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:25:09.897 request: 00:25:09.897 { 00:25:09.897 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:25:09.897 "secure_channel": false, 00:25:09.897 "listen_address": { 00:25:09.897 "trtype": "tcp", 00:25:09.897 "traddr": "127.0.0.1", 00:25:09.897 "trsvcid": "4420" 00:25:09.897 }, 00:25:09.897 "method": "nvmf_subsystem_add_listener", 00:25:09.897 "req_id": 1 00:25:09.897 } 00:25:09.897 Got JSON-RPC error response 00:25:09.897 response: 00:25:09.897 { 00:25:09.897 "code": -32602, 00:25:09.897 "message": "Invalid parameters" 00:25:09.897 } 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:09.897 17:00:33 keyring_file -- keyring/file.sh@47 -- # bperfpid=101750 00:25:09.897 17:00:33 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:25:09.897 17:00:33 keyring_file -- keyring/file.sh@49 -- # waitforlisten 101750 /var/tmp/bperf.sock 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 101750 ']' 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.897 17:00:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:09.897 [2024-11-29 17:00:33.530272] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:09.897 [2024-11-29 17:00:33.530746] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101750 ] 00:25:09.897 [2024-11-29 17:00:33.656271] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
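The bperf process started above is an ordinary bdevperf run in RPC-driven mode; nothing happens until a perform_tests RPC arrives. Flag breakdown for the invocation (descriptions assume the standard bdevperf options):

  # -q 128          queue depth            -o 4k    I/O size
  # -w randrw -M 50                        random mixed workload, 50% reads
  # -t 1            run time in seconds    -m 2     core mask 0x2
  # -r <path>       RPC socket to listen on
  # -z              start idle and wait for perform_tests over RPC
  ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z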
00:25:09.897 [2024-11-29 17:00:33.685424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.157 [2024-11-29 17:00:33.709623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.157 [2024-11-29 17:00:33.742597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:10.157 17:00:33 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.157 17:00:33 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:25:10.157 17:00:33 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xBkN76vSMt 00:25:10.157 17:00:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xBkN76vSMt 00:25:10.416 17:00:34 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BuLnXbNtAr 00:25:10.416 17:00:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BuLnXbNtAr 00:25:10.675 17:00:34 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:25:10.675 17:00:34 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:25:10.675 17:00:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:10.675 17:00:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:10.675 17:00:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:10.934 17:00:34 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.xBkN76vSMt == \/\t\m\p\/\t\m\p\.\x\B\k\N\7\6\v\S\M\t ]] 00:25:10.934 17:00:34 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:25:10.934 17:00:34 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:25:10.934 17:00:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:10.934 17:00:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:10.934 17:00:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:11.194 17:00:34 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.BuLnXbNtAr == \/\t\m\p\/\t\m\p\.\B\u\L\n\X\b\N\t\A\r ]] 00:25:11.194 17:00:34 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:25:11.194 17:00:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:11.194 17:00:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:11.194 17:00:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:11.194 17:00:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:11.194 17:00:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:11.454 17:00:35 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:25:11.454 17:00:35 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:25:11.454 17:00:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:11.454 17:00:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:11.454 17:00:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:11.454 17:00:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:11.454 17:00:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
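The get_key and get_refcnt helpers used throughout this test are thin wrappers around the keyring_get_keys RPC plus jq filtering. Equivalent one-liners against the bperf socket (key name and socket path as in the log):

  # Full entry for key0 (path, refcnt, ...):
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")'
  # Just the reference count, as checked by the (( refcnt == N )) assertions:
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'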
00:25:11.712 17:00:35 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:25:11.712 17:00:35 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:11.712 17:00:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:11.970 [2024-11-29 17:00:35.745580] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:12.229 nvme0n1 00:25:12.229 17:00:35 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:25:12.230 17:00:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:12.230 17:00:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:12.230 17:00:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:12.230 17:00:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:12.230 17:00:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:12.489 17:00:36 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:25:12.489 17:00:36 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:25:12.489 17:00:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:12.489 17:00:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:12.489 17:00:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:12.489 17:00:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:12.489 17:00:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:12.748 17:00:36 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:25:12.748 17:00:36 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:12.748 Running I/O for 1 seconds... 
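Condensed, the TLS happy path exercised here is just two RPC-driven steps against the idle bdevperf, using the key registered a moment ago (commands as in the log, paths relative to the spdk repo):

  # Attach an NVMe-oF TCP controller with the TLS PSK "key0", then start the queued workload.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests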
00:25:13.686 14058.00 IOPS, 54.91 MiB/s 00:25:13.686 Latency(us) 00:25:13.686 [2024-11-29T17:00:37.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.686 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:13.686 nvme0n1 : 1.01 14103.61 55.09 0.00 0.00 9053.25 4230.05 16920.20 00:25:13.686 [2024-11-29T17:00:37.478Z] =================================================================================================================== 00:25:13.686 [2024-11-29T17:00:37.478Z] Total : 14103.61 55.09 0.00 0.00 9053.25 4230.05 16920.20 00:25:13.686 { 00:25:13.686 "results": [ 00:25:13.686 { 00:25:13.686 "job": "nvme0n1", 00:25:13.686 "core_mask": "0x2", 00:25:13.686 "workload": "randrw", 00:25:13.686 "percentage": 50, 00:25:13.686 "status": "finished", 00:25:13.686 "queue_depth": 128, 00:25:13.686 "io_size": 4096, 00:25:13.686 "runtime": 1.005913, 00:25:13.686 "iops": 14103.6053813799, 00:25:13.686 "mibps": 55.09220852101524, 00:25:13.686 "io_failed": 0, 00:25:13.686 "io_timeout": 0, 00:25:13.686 "avg_latency_us": 9053.24845562839, 00:25:13.686 "min_latency_us": 4230.050909090909, 00:25:13.686 "max_latency_us": 16920.203636363636 00:25:13.686 } 00:25:13.686 ], 00:25:13.686 "core_count": 1 00:25:13.686 } 00:25:13.686 17:00:37 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:13.686 17:00:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:14.254 17:00:37 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:25:14.254 17:00:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:14.254 17:00:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:14.254 17:00:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:14.254 17:00:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:14.254 17:00:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:14.254 17:00:38 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:14.254 17:00:38 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:25:14.254 17:00:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:14.254 17:00:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:14.254 17:00:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:14.254 17:00:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:14.254 17:00:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:14.514 17:00:38 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:25:14.514 17:00:38 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:14.514 17:00:38 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:25:14.514 17:00:38 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:14.514 17:00:38 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:14.514 17:00:38 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.514 17:00:38 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:14.514 17:00:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.514 17:00:38 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:14.514 17:00:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:14.774 [2024-11-29 17:00:38.532910] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:14.774 [2024-11-29 17:00:38.533604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bcc50 (107): Transport endpoint is not connected 00:25:14.774 [2024-11-29 17:00:38.534596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bcc50 (9): Bad file descriptor 00:25:14.774 [2024-11-29 17:00:38.535593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:25:14.774 [2024-11-29 17:00:38.535614] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:14.774 [2024-11-29 17:00:38.535623] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:14.774 [2024-11-29 17:00:38.535634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:25:14.774 request: 00:25:14.774 { 00:25:14.774 "name": "nvme0", 00:25:14.774 "trtype": "tcp", 00:25:14.774 "traddr": "127.0.0.1", 00:25:14.774 "adrfam": "ipv4", 00:25:14.774 "trsvcid": "4420", 00:25:14.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:14.774 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:14.774 "prchk_reftag": false, 00:25:14.774 "prchk_guard": false, 00:25:14.774 "hdgst": false, 00:25:14.774 "ddgst": false, 00:25:14.774 "psk": "key1", 00:25:14.774 "allow_unrecognized_csi": false, 00:25:14.774 "method": "bdev_nvme_attach_controller", 00:25:14.774 "req_id": 1 00:25:14.774 } 00:25:14.774 Got JSON-RPC error response 00:25:14.774 response: 00:25:14.774 { 00:25:14.774 "code": -5, 00:25:14.774 "message": "Input/output error" 00:25:14.774 } 00:25:14.774 17:00:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:14.774 17:00:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:14.774 17:00:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:14.774 17:00:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:14.774 17:00:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:25:14.774 17:00:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:14.774 17:00:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:14.774 17:00:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:14.774 17:00:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:14.774 17:00:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:15.034 17:00:38 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:15.034 17:00:38 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:25:15.034 17:00:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:15.034 17:00:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:15.034 17:00:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:15.034 17:00:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:15.034 17:00:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:15.293 17:00:39 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:25:15.293 17:00:39 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:25:15.293 17:00:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:15.551 17:00:39 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:25:15.551 17:00:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:15.810 17:00:39 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:25:15.810 17:00:39 keyring_file -- keyring/file.sh@78 -- # jq length 00:25:15.810 17:00:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:16.069 17:00:39 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:25:16.069 17:00:39 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.xBkN76vSMt 00:25:16.069 17:00:39 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.xBkN76vSMt 00:25:16.069 17:00:39 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:25:16.069 17:00:39 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.xBkN76vSMt 00:25:16.069 17:00:39 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:16.069 17:00:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.069 17:00:39 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:16.069 17:00:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.069 17:00:39 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xBkN76vSMt 00:25:16.069 17:00:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xBkN76vSMt 00:25:16.329 [2024-11-29 17:00:39.993016] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xBkN76vSMt': 0100660 00:25:16.329 [2024-11-29 17:00:39.993053] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:16.329 request: 00:25:16.329 { 00:25:16.329 "name": "key0", 00:25:16.329 "path": "/tmp/tmp.xBkN76vSMt", 00:25:16.329 "method": "keyring_file_add_key", 00:25:16.329 "req_id": 1 00:25:16.329 } 00:25:16.329 Got JSON-RPC error response 00:25:16.329 response: 00:25:16.329 { 00:25:16.329 "code": -1, 00:25:16.329 "message": "Operation not permitted" 00:25:16.329 } 00:25:16.329 17:00:40 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:16.329 17:00:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:16.329 17:00:40 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:16.329 17:00:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:16.329 17:00:40 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.xBkN76vSMt 00:25:16.329 17:00:40 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xBkN76vSMt 00:25:16.329 17:00:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xBkN76vSMt 00:25:16.588 17:00:40 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.xBkN76vSMt 00:25:16.588 17:00:40 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:25:16.588 17:00:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:16.588 17:00:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:16.588 17:00:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:16.588 17:00:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:16.588 17:00:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:16.847 17:00:40 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:25:16.847 17:00:40 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:16.847 17:00:40 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:25:16.847 17:00:40 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:16.847 17:00:40 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:16.847 17:00:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.847 17:00:40 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:16.847 17:00:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.847 17:00:40 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:16.847 17:00:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:17.106 [2024-11-29 17:00:40.773168] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.xBkN76vSMt': No such file or directory 00:25:17.106 [2024-11-29 17:00:40.773204] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:17.106 [2024-11-29 17:00:40.773240] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:17.106 [2024-11-29 17:00:40.773248] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:25:17.106 [2024-11-29 17:00:40.773257] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:17.106 [2024-11-29 17:00:40.773264] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:17.106 request: 00:25:17.106 { 00:25:17.106 "name": "nvme0", 00:25:17.106 "trtype": "tcp", 00:25:17.106 "traddr": "127.0.0.1", 00:25:17.106 "adrfam": "ipv4", 00:25:17.106 "trsvcid": "4420", 00:25:17.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:17.106 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:17.106 "prchk_reftag": false, 00:25:17.106 "prchk_guard": false, 00:25:17.106 "hdgst": false, 00:25:17.106 "ddgst": false, 00:25:17.106 "psk": "key0", 00:25:17.106 "allow_unrecognized_csi": false, 00:25:17.106 "method": "bdev_nvme_attach_controller", 00:25:17.106 "req_id": 1 00:25:17.106 } 00:25:17.106 Got JSON-RPC error response 00:25:17.106 response: 00:25:17.106 { 00:25:17.106 "code": -19, 00:25:17.106 "message": "No such device" 00:25:17.106 } 00:25:17.106 17:00:40 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:17.106 17:00:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:17.106 17:00:40 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:17.106 17:00:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:17.106 17:00:40 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:25:17.106 17:00:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:17.365 17:00:41 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:17.365 17:00:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:17.365 17:00:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:17.365 17:00:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:17.365 
17:00:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:17.365 17:00:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:17.365 17:00:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uNkNt8bjaN 00:25:17.365 17:00:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:17.365 17:00:41 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:17.365 17:00:41 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:25:17.365 17:00:41 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:17.365 17:00:41 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:25:17.365 17:00:41 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:25:17.365 17:00:41 keyring_file -- nvmf/common.sh@733 -- # python - 00:25:17.365 17:00:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uNkNt8bjaN 00:25:17.365 17:00:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uNkNt8bjaN 00:25:17.365 17:00:41 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.uNkNt8bjaN 00:25:17.365 17:00:41 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uNkNt8bjaN 00:25:17.365 17:00:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uNkNt8bjaN 00:25:17.625 17:00:41 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:17.625 17:00:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:17.884 nvme0n1 00:25:17.884 17:00:41 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:25:17.884 17:00:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:17.884 17:00:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:17.884 17:00:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:17.884 17:00:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:17.884 17:00:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:18.143 17:00:41 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:25:18.143 17:00:41 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:25:18.143 17:00:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:18.402 17:00:42 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:25:18.402 17:00:42 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:25:18.402 17:00:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:18.402 17:00:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:18.402 17:00:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:18.661 17:00:42 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:25:18.920 17:00:42 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:25:18.920 17:00:42 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:25:18.920 17:00:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:18.920 17:00:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:18.920 17:00:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:18.920 17:00:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:18.920 17:00:42 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:25:18.920 17:00:42 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:18.920 17:00:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:19.179 17:00:42 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:25:19.179 17:00:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:19.179 17:00:42 keyring_file -- keyring/file.sh@105 -- # jq length 00:25:19.438 17:00:43 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:25:19.438 17:00:43 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uNkNt8bjaN 00:25:19.438 17:00:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uNkNt8bjaN 00:25:19.698 17:00:43 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BuLnXbNtAr 00:25:19.698 17:00:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BuLnXbNtAr 00:25:19.957 17:00:43 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:19.957 17:00:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:20.216 nvme0n1 00:25:20.216 17:00:43 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:25:20.216 17:00:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:20.476 17:00:44 keyring_file -- keyring/file.sh@113 -- # config='{ 00:25:20.476 "subsystems": [ 00:25:20.476 { 00:25:20.476 "subsystem": "keyring", 00:25:20.476 "config": [ 00:25:20.476 { 00:25:20.476 "method": "keyring_file_add_key", 00:25:20.476 "params": { 00:25:20.476 "name": "key0", 00:25:20.476 "path": "/tmp/tmp.uNkNt8bjaN" 00:25:20.476 } 00:25:20.476 }, 00:25:20.476 { 00:25:20.476 "method": "keyring_file_add_key", 00:25:20.476 "params": { 00:25:20.476 "name": "key1", 00:25:20.476 "path": "/tmp/tmp.BuLnXbNtAr" 00:25:20.476 } 00:25:20.476 } 00:25:20.476 ] 00:25:20.476 }, 00:25:20.476 { 00:25:20.476 "subsystem": "iobuf", 00:25:20.476 "config": [ 00:25:20.476 { 00:25:20.476 "method": "iobuf_set_options", 00:25:20.476 "params": { 00:25:20.476 "small_pool_count": 8192, 00:25:20.476 "large_pool_count": 1024, 00:25:20.476 "small_bufsize": 8192, 00:25:20.476 "large_bufsize": 135168, 00:25:20.476 "enable_numa": false 00:25:20.476 } 00:25:20.476 } 00:25:20.476 ] 00:25:20.476 }, 00:25:20.476 { 00:25:20.476 "subsystem": 
"sock", 00:25:20.476 "config": [ 00:25:20.476 { 00:25:20.476 "method": "sock_set_default_impl", 00:25:20.476 "params": { 00:25:20.476 "impl_name": "uring" 00:25:20.476 } 00:25:20.476 }, 00:25:20.476 { 00:25:20.476 "method": "sock_impl_set_options", 00:25:20.476 "params": { 00:25:20.476 "impl_name": "ssl", 00:25:20.476 "recv_buf_size": 4096, 00:25:20.476 "send_buf_size": 4096, 00:25:20.476 "enable_recv_pipe": true, 00:25:20.476 "enable_quickack": false, 00:25:20.476 "enable_placement_id": 0, 00:25:20.476 "enable_zerocopy_send_server": true, 00:25:20.476 "enable_zerocopy_send_client": false, 00:25:20.476 "zerocopy_threshold": 0, 00:25:20.476 "tls_version": 0, 00:25:20.476 "enable_ktls": false 00:25:20.476 } 00:25:20.476 }, 00:25:20.476 { 00:25:20.476 "method": "sock_impl_set_options", 00:25:20.476 "params": { 00:25:20.476 "impl_name": "posix", 00:25:20.476 "recv_buf_size": 2097152, 00:25:20.476 "send_buf_size": 2097152, 00:25:20.476 "enable_recv_pipe": true, 00:25:20.476 "enable_quickack": false, 00:25:20.476 "enable_placement_id": 0, 00:25:20.476 "enable_zerocopy_send_server": true, 00:25:20.476 "enable_zerocopy_send_client": false, 00:25:20.476 "zerocopy_threshold": 0, 00:25:20.476 "tls_version": 0, 00:25:20.476 "enable_ktls": false 00:25:20.476 } 00:25:20.476 }, 00:25:20.476 { 00:25:20.476 "method": "sock_impl_set_options", 00:25:20.476 "params": { 00:25:20.476 "impl_name": "uring", 00:25:20.476 "recv_buf_size": 2097152, 00:25:20.476 "send_buf_size": 2097152, 00:25:20.476 "enable_recv_pipe": true, 00:25:20.476 "enable_quickack": false, 00:25:20.476 "enable_placement_id": 0, 00:25:20.476 "enable_zerocopy_send_server": false, 00:25:20.476 "enable_zerocopy_send_client": false, 00:25:20.476 "zerocopy_threshold": 0, 00:25:20.476 "tls_version": 0, 00:25:20.476 "enable_ktls": false 00:25:20.476 } 00:25:20.476 } 00:25:20.476 ] 00:25:20.476 }, 00:25:20.476 { 00:25:20.476 "subsystem": "vmd", 00:25:20.476 "config": [] 00:25:20.476 }, 00:25:20.476 { 00:25:20.476 "subsystem": "accel", 00:25:20.476 "config": [ 00:25:20.476 { 00:25:20.476 "method": "accel_set_options", 00:25:20.476 "params": { 00:25:20.476 "small_cache_size": 128, 00:25:20.476 "large_cache_size": 16, 00:25:20.476 "task_count": 2048, 00:25:20.476 "sequence_count": 2048, 00:25:20.476 "buf_count": 2048 00:25:20.476 } 00:25:20.476 } 00:25:20.476 ] 00:25:20.476 }, 00:25:20.476 { 00:25:20.476 "subsystem": "bdev", 00:25:20.476 "config": [ 00:25:20.476 { 00:25:20.476 "method": "bdev_set_options", 00:25:20.476 "params": { 00:25:20.476 "bdev_io_pool_size": 65535, 00:25:20.476 "bdev_io_cache_size": 256, 00:25:20.476 "bdev_auto_examine": true, 00:25:20.476 "iobuf_small_cache_size": 128, 00:25:20.476 "iobuf_large_cache_size": 16 00:25:20.476 } 00:25:20.476 }, 00:25:20.476 { 00:25:20.476 "method": "bdev_raid_set_options", 00:25:20.476 "params": { 00:25:20.476 "process_window_size_kb": 1024, 00:25:20.476 "process_max_bandwidth_mb_sec": 0 00:25:20.476 } 00:25:20.476 }, 00:25:20.476 { 00:25:20.476 "method": "bdev_iscsi_set_options", 00:25:20.476 "params": { 00:25:20.476 "timeout_sec": 30 00:25:20.476 } 00:25:20.476 }, 00:25:20.476 { 00:25:20.476 "method": "bdev_nvme_set_options", 00:25:20.476 "params": { 00:25:20.476 "action_on_timeout": "none", 00:25:20.476 "timeout_us": 0, 00:25:20.476 "timeout_admin_us": 0, 00:25:20.476 "keep_alive_timeout_ms": 10000, 00:25:20.476 "arbitration_burst": 0, 00:25:20.476 "low_priority_weight": 0, 00:25:20.476 "medium_priority_weight": 0, 00:25:20.476 "high_priority_weight": 0, 00:25:20.476 "nvme_adminq_poll_period_us": 
10000, 00:25:20.476 "nvme_ioq_poll_period_us": 0, 00:25:20.476 "io_queue_requests": 512, 00:25:20.476 "delay_cmd_submit": true, 00:25:20.476 "transport_retry_count": 4, 00:25:20.476 "bdev_retry_count": 3, 00:25:20.476 "transport_ack_timeout": 0, 00:25:20.476 "ctrlr_loss_timeout_sec": 0, 00:25:20.476 "reconnect_delay_sec": 0, 00:25:20.476 "fast_io_fail_timeout_sec": 0, 00:25:20.476 "disable_auto_failback": false, 00:25:20.476 "generate_uuids": false, 00:25:20.476 "transport_tos": 0, 00:25:20.476 "nvme_error_stat": false, 00:25:20.476 "rdma_srq_size": 0, 00:25:20.476 "io_path_stat": false, 00:25:20.476 "allow_accel_sequence": false, 00:25:20.476 "rdma_max_cq_size": 0, 00:25:20.476 "rdma_cm_event_timeout_ms": 0, 00:25:20.476 "dhchap_digests": [ 00:25:20.476 "sha256", 00:25:20.476 "sha384", 00:25:20.476 "sha512" 00:25:20.476 ], 00:25:20.476 "dhchap_dhgroups": [ 00:25:20.476 "null", 00:25:20.476 "ffdhe2048", 00:25:20.476 "ffdhe3072", 00:25:20.476 "ffdhe4096", 00:25:20.476 "ffdhe6144", 00:25:20.476 "ffdhe8192" 00:25:20.476 ] 00:25:20.476 } 00:25:20.476 }, 00:25:20.476 { 00:25:20.476 "method": "bdev_nvme_attach_controller", 00:25:20.477 "params": { 00:25:20.477 "name": "nvme0", 00:25:20.477 "trtype": "TCP", 00:25:20.477 "adrfam": "IPv4", 00:25:20.477 "traddr": "127.0.0.1", 00:25:20.477 "trsvcid": "4420", 00:25:20.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:20.477 "prchk_reftag": false, 00:25:20.477 "prchk_guard": false, 00:25:20.477 "ctrlr_loss_timeout_sec": 0, 00:25:20.477 "reconnect_delay_sec": 0, 00:25:20.477 "fast_io_fail_timeout_sec": 0, 00:25:20.477 "psk": "key0", 00:25:20.477 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:20.477 "hdgst": false, 00:25:20.477 "ddgst": false, 00:25:20.477 "multipath": "multipath" 00:25:20.477 } 00:25:20.477 }, 00:25:20.477 { 00:25:20.477 "method": "bdev_nvme_set_hotplug", 00:25:20.477 "params": { 00:25:20.477 "period_us": 100000, 00:25:20.477 "enable": false 00:25:20.477 } 00:25:20.477 }, 00:25:20.477 { 00:25:20.477 "method": "bdev_wait_for_examine" 00:25:20.477 } 00:25:20.477 ] 00:25:20.477 }, 00:25:20.477 { 00:25:20.477 "subsystem": "nbd", 00:25:20.477 "config": [] 00:25:20.477 } 00:25:20.477 ] 00:25:20.477 }' 00:25:20.477 17:00:44 keyring_file -- keyring/file.sh@115 -- # killprocess 101750 00:25:20.477 17:00:44 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 101750 ']' 00:25:20.477 17:00:44 keyring_file -- common/autotest_common.sh@958 -- # kill -0 101750 00:25:20.477 17:00:44 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:20.477 17:00:44 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.477 17:00:44 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101750 00:25:20.477 killing process with pid 101750 00:25:20.477 Received shutdown signal, test time was about 1.000000 seconds 00:25:20.477 00:25:20.477 Latency(us) 00:25:20.477 [2024-11-29T17:00:44.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.477 [2024-11-29T17:00:44.269Z] =================================================================================================================== 00:25:20.477 [2024-11-29T17:00:44.269Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.477 17:00:44 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:20.477 17:00:44 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:20.477 17:00:44 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101750' 00:25:20.477 
17:00:44 keyring_file -- common/autotest_common.sh@973 -- # kill 101750 00:25:20.477 17:00:44 keyring_file -- common/autotest_common.sh@978 -- # wait 101750 00:25:20.736 17:00:44 keyring_file -- keyring/file.sh@118 -- # bperfpid=101993 00:25:20.736 17:00:44 keyring_file -- keyring/file.sh@120 -- # waitforlisten 101993 /var/tmp/bperf.sock 00:25:20.736 17:00:44 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:20.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:20.736 17:00:44 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 101993 ']' 00:25:20.736 17:00:44 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:25:20.736 "subsystems": [ 00:25:20.736 { 00:25:20.737 "subsystem": "keyring", 00:25:20.737 "config": [ 00:25:20.737 { 00:25:20.737 "method": "keyring_file_add_key", 00:25:20.737 "params": { 00:25:20.737 "name": "key0", 00:25:20.737 "path": "/tmp/tmp.uNkNt8bjaN" 00:25:20.737 } 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "method": "keyring_file_add_key", 00:25:20.737 "params": { 00:25:20.737 "name": "key1", 00:25:20.737 "path": "/tmp/tmp.BuLnXbNtAr" 00:25:20.737 } 00:25:20.737 } 00:25:20.737 ] 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "subsystem": "iobuf", 00:25:20.737 "config": [ 00:25:20.737 { 00:25:20.737 "method": "iobuf_set_options", 00:25:20.737 "params": { 00:25:20.737 "small_pool_count": 8192, 00:25:20.737 "large_pool_count": 1024, 00:25:20.737 "small_bufsize": 8192, 00:25:20.737 "large_bufsize": 135168, 00:25:20.737 "enable_numa": false 00:25:20.737 } 00:25:20.737 } 00:25:20.737 ] 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "subsystem": "sock", 00:25:20.737 "config": [ 00:25:20.737 { 00:25:20.737 "method": "sock_set_default_impl", 00:25:20.737 "params": { 00:25:20.737 "impl_name": "uring" 00:25:20.737 } 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "method": "sock_impl_set_options", 00:25:20.737 "params": { 00:25:20.737 "impl_name": "ssl", 00:25:20.737 "recv_buf_size": 4096, 00:25:20.737 "send_buf_size": 4096, 00:25:20.737 "enable_recv_pipe": true, 00:25:20.737 "enable_quickack": false, 00:25:20.737 "enable_placement_id": 0, 00:25:20.737 "enable_zerocopy_send_server": true, 00:25:20.737 "enable_zerocopy_send_client": false, 00:25:20.737 "zerocopy_threshold": 0, 00:25:20.737 "tls_version": 0, 00:25:20.737 "enable_ktls": false 00:25:20.737 } 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "method": "sock_impl_set_options", 00:25:20.737 "params": { 00:25:20.737 "impl_name": "posix", 00:25:20.737 "recv_buf_size": 2097152, 00:25:20.737 "send_buf_size": 2097152, 00:25:20.737 "enable_recv_pipe": true, 00:25:20.737 "enable_quickack": false, 00:25:20.737 "enable_placement_id": 0, 00:25:20.737 "enable_zerocopy_send_server": true, 00:25:20.737 "enable_zerocopy_send_client": false, 00:25:20.737 "zerocopy_threshold": 0, 00:25:20.737 "tls_version": 0, 00:25:20.737 "enable_ktls": false 00:25:20.737 } 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "method": "sock_impl_set_options", 00:25:20.737 "params": { 00:25:20.737 "impl_name": "uring", 00:25:20.737 "recv_buf_size": 2097152, 00:25:20.737 "send_buf_size": 2097152, 00:25:20.737 "enable_recv_pipe": true, 00:25:20.737 "enable_quickack": false, 00:25:20.737 "enable_placement_id": 0, 00:25:20.737 "enable_zerocopy_send_server": false, 00:25:20.737 "enable_zerocopy_send_client": false, 00:25:20.737 "zerocopy_threshold": 0, 00:25:20.737 "tls_version": 0, 00:25:20.737 
"enable_ktls": false 00:25:20.737 } 00:25:20.737 } 00:25:20.737 ] 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "subsystem": "vmd", 00:25:20.737 "config": [] 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "subsystem": "accel", 00:25:20.737 "config": [ 00:25:20.737 { 00:25:20.737 "method": "accel_set_options", 00:25:20.737 "params": { 00:25:20.737 "small_cache_size": 128, 00:25:20.737 "large_cache_size": 16, 00:25:20.737 "task_count": 2048, 00:25:20.737 "sequence_count": 2048, 00:25:20.737 "buf_count": 2048 00:25:20.737 } 00:25:20.737 } 00:25:20.737 ] 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "subsystem": "bdev", 00:25:20.737 "config": [ 00:25:20.737 { 00:25:20.737 "method": "bdev_set_options", 00:25:20.737 "params": { 00:25:20.737 "bdev_io_pool_size": 65535, 00:25:20.737 "bdev_io_cache_size": 256, 00:25:20.737 "bdev_auto_examine": true, 00:25:20.737 "iobuf_small_cache_size": 128, 00:25:20.737 "iobuf_large_cache_size": 16 00:25:20.737 } 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "method": "bdev_raid_set_options", 00:25:20.737 "params": { 00:25:20.737 "process_window_size_kb": 1024, 00:25:20.737 "process_max_bandwidth_mb_sec": 0 00:25:20.737 } 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "method": "bdev_iscsi_set_options", 00:25:20.737 "params": { 00:25:20.737 "timeout_sec": 30 00:25:20.737 } 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "method": "bdev_nvme_set_options", 00:25:20.737 "params": { 00:25:20.737 "action_on_timeout": "none", 00:25:20.737 "timeout_us": 0, 00:25:20.737 "timeout_admin_us": 0, 00:25:20.737 "keep_alive_timeout_ms": 10000, 00:25:20.737 "arbitration_burst": 0, 00:25:20.737 "low_priority_weight": 0, 00:25:20.737 "medium_priority_weight": 0, 00:25:20.737 "high_priority_weight": 0, 00:25:20.737 "nvme_adminq_poll_period_us": 10000, 00:25:20.737 "nvme_ioq_poll_period_us": 0, 00:25:20.737 "io_queue_requests": 512, 00:25:20.737 "delay_cmd_submit": true, 00:25:20.737 "transport_retry_count": 4, 00:25:20.737 "bdev_retry_count": 3, 00:25:20.737 "transport_ack_timeout": 0, 00:25:20.737 "ctrlr_loss_timeout_sec": 0, 00:25:20.737 "reconnect_delay_sec": 0, 00:25:20.737 "fast_io_fail_timeout_sec": 0, 00:25:20.737 "disable_auto_failback": false, 00:25:20.737 "generate_uuids": false, 00:25:20.737 "transport_tos": 0, 00:25:20.737 "nvme_error_stat": false, 00:25:20.737 "rdma_srq_size": 0, 00:25:20.737 "io_path_stat": false, 00:25:20.737 "allow_accel_sequence": false, 00:25:20.737 "rdma_max_cq_size": 0, 00:25:20.737 "rdma_cm_event_timeout_ms": 0, 00:25:20.737 "dhchap_digests": [ 00:25:20.737 "sha256", 00:25:20.737 "sha384", 00:25:20.737 "sha512" 00:25:20.737 ], 00:25:20.737 "dhchap_dhgroups": [ 00:25:20.737 "null", 00:25:20.737 "ffdhe2048", 00:25:20.737 "ffdhe3072", 00:25:20.737 "ffdhe4096", 00:25:20.737 "ffdhe6144", 00:25:20.737 "ffdhe8192" 00:25:20.737 ] 00:25:20.737 } 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "method": "bdev_nvme_attach_controller", 00:25:20.737 "params": { 00:25:20.737 "name": "nvme0", 00:25:20.737 "trtype": "TCP", 00:25:20.737 "adrfam": "IPv4", 00:25:20.737 "traddr": "127.0.0.1", 00:25:20.737 "trsvcid": "4420", 00:25:20.737 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:20.737 "prchk_reftag": false, 00:25:20.737 "prchk_guard": false, 00:25:20.737 "ctrlr_loss_timeout_sec": 0, 00:25:20.737 "reconnect_delay_sec": 0, 00:25:20.737 "fast_io_fail_timeout_sec": 0, 00:25:20.737 "psk": "key0", 00:25:20.737 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:20.737 "hdgst": false, 00:25:20.737 "ddgst": false, 00:25:20.737 "multipath": "multipath" 00:25:20.737 } 00:25:20.737 }, 
00:25:20.737 { 00:25:20.737 "method": "bdev_nvme_set_hotplug", 00:25:20.737 "params": { 00:25:20.737 "period_us": 100000, 00:25:20.737 "enable": false 00:25:20.737 } 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "method": "bdev_wait_for_examine" 00:25:20.737 } 00:25:20.737 ] 00:25:20.737 }, 00:25:20.737 { 00:25:20.737 "subsystem": "nbd", 00:25:20.737 "config": [] 00:25:20.737 } 00:25:20.737 ] 00:25:20.737 }' 00:25:20.737 17:00:44 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:20.737 17:00:44 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.737 17:00:44 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:20.737 17:00:44 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.737 17:00:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:20.737 [2024-11-29 17:00:44.376722] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:20.737 [2024-11-29 17:00:44.376980] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101993 ] 00:25:20.737 [2024-11-29 17:00:44.496772] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:20.738 [2024-11-29 17:00:44.514967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.997 [2024-11-29 17:00:44.534452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.997 [2024-11-29 17:00:44.641490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:20.997 [2024-11-29 17:00:44.677690] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:21.563 17:00:45 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.563 17:00:45 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:25:21.563 17:00:45 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:25:21.563 17:00:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:21.563 17:00:45 keyring_file -- keyring/file.sh@121 -- # jq length 00:25:21.821 17:00:45 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:21.821 17:00:45 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:25:21.821 17:00:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:21.821 17:00:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:21.821 17:00:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:21.821 17:00:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:21.821 17:00:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:22.079 17:00:45 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:25:22.080 17:00:45 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:25:22.080 17:00:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:22.080 17:00:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:22.080 17:00:45 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:25:22.080 17:00:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:22.080 17:00:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:22.338 17:00:46 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:25:22.338 17:00:46 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:25:22.338 17:00:46 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:25:22.338 17:00:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:22.598 17:00:46 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:25:22.598 17:00:46 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:22.598 17:00:46 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.uNkNt8bjaN /tmp/tmp.BuLnXbNtAr 00:25:22.598 17:00:46 keyring_file -- keyring/file.sh@20 -- # killprocess 101993 00:25:22.598 17:00:46 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 101993 ']' 00:25:22.598 17:00:46 keyring_file -- common/autotest_common.sh@958 -- # kill -0 101993 00:25:22.598 17:00:46 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:22.598 17:00:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:22.598 17:00:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101993 00:25:22.598 killing process with pid 101993 00:25:22.598 Received shutdown signal, test time was about 1.000000 seconds 00:25:22.598 00:25:22.598 Latency(us) 00:25:22.598 [2024-11-29T17:00:46.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.598 [2024-11-29T17:00:46.390Z] =================================================================================================================== 00:25:22.598 [2024-11-29T17:00:46.390Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:22.598 17:00:46 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:22.598 17:00:46 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:22.598 17:00:46 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101993' 00:25:22.598 17:00:46 keyring_file -- common/autotest_common.sh@973 -- # kill 101993 00:25:22.598 17:00:46 keyring_file -- common/autotest_common.sh@978 -- # wait 101993 00:25:22.871 17:00:46 keyring_file -- keyring/file.sh@21 -- # killprocess 101746 00:25:22.871 17:00:46 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 101746 ']' 00:25:22.871 17:00:46 keyring_file -- common/autotest_common.sh@958 -- # kill -0 101746 00:25:22.871 17:00:46 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:22.871 17:00:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:22.871 17:00:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101746 00:25:22.871 killing process with pid 101746 00:25:22.871 17:00:46 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:22.871 17:00:46 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:22.871 17:00:46 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101746' 00:25:22.871 17:00:46 keyring_file -- common/autotest_common.sh@973 -- # kill 101746 00:25:22.871 17:00:46 keyring_file -- common/autotest_common.sh@978 -- # wait 101746 
00:25:22.871 00:25:22.871 real 0m13.973s 00:25:22.871 user 0m36.323s 00:25:22.871 sys 0m2.551s 00:25:22.871 17:00:46 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:22.871 ************************************ 00:25:22.871 END TEST keyring_file 00:25:22.871 ************************************ 00:25:22.871 17:00:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:23.163 17:00:46 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:25:23.163 17:00:46 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:23.163 17:00:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:23.163 17:00:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:23.163 17:00:46 -- common/autotest_common.sh@10 -- # set +x 00:25:23.163 ************************************ 00:25:23.163 START TEST keyring_linux 00:25:23.163 ************************************ 00:25:23.163 17:00:46 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:23.163 Joined session keyring: 174705839 00:25:23.163 * Looking for test storage... 00:25:23.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:23.163 17:00:46 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:23.163 17:00:46 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:25:23.163 17:00:46 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:23.163 17:00:46 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@345 -- # : 1 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.163 17:00:46 keyring_linux -- scripts/common.sh@368 -- # return 0 00:25:23.163 17:00:46 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.163 17:00:46 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:23.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.163 --rc genhtml_branch_coverage=1 00:25:23.163 --rc genhtml_function_coverage=1 00:25:23.163 --rc genhtml_legend=1 00:25:23.163 --rc geninfo_all_blocks=1 00:25:23.163 --rc geninfo_unexecuted_blocks=1 00:25:23.163 00:25:23.163 ' 00:25:23.163 17:00:46 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:23.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.163 --rc genhtml_branch_coverage=1 00:25:23.163 --rc genhtml_function_coverage=1 00:25:23.163 --rc genhtml_legend=1 00:25:23.163 --rc geninfo_all_blocks=1 00:25:23.163 --rc geninfo_unexecuted_blocks=1 00:25:23.163 00:25:23.163 ' 00:25:23.163 17:00:46 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:23.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.163 --rc genhtml_branch_coverage=1 00:25:23.163 --rc genhtml_function_coverage=1 00:25:23.163 --rc genhtml_legend=1 00:25:23.163 --rc geninfo_all_blocks=1 00:25:23.163 --rc geninfo_unexecuted_blocks=1 00:25:23.163 00:25:23.163 ' 00:25:23.163 17:00:46 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:23.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.163 --rc genhtml_branch_coverage=1 00:25:23.163 --rc genhtml_function_coverage=1 00:25:23.163 --rc genhtml_legend=1 00:25:23.163 --rc geninfo_all_blocks=1 00:25:23.163 --rc geninfo_unexecuted_blocks=1 00:25:23.163 00:25:23.163 ' 00:25:23.163 17:00:46 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:23.163 17:00:46 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:23.163 17:00:46 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:23.163 17:00:46 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.163 17:00:46 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.163 17:00:46 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.163 17:00:46 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.163 17:00:46 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.163 17:00:46 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.163 17:00:46 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.163 17:00:46 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.163 17:00:46 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.163 17:00:46 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.163 17:00:46 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ecede086-b106-482f-ba49-ce4e74dc3f2b 00:25:23.163 17:00:46 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=ecede086-b106-482f-ba49-ce4e74dc3f2b 00:25:23.163 17:00:46 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:23.164 17:00:46 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.164 17:00:46 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.164 17:00:46 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.164 17:00:46 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.164 17:00:46 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.164 17:00:46 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.164 17:00:46 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.164 17:00:46 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:23.164 17:00:46 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:23.164 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:23.164 17:00:46 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:23.164 17:00:46 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:23.164 17:00:46 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:23.164 17:00:46 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:23.164 17:00:46 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:23.164 17:00:46 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:23.164 17:00:46 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:23.164 17:00:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:23.164 17:00:46 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:23.164 17:00:46 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:23.164 17:00:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:23.164 17:00:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:23.164 17:00:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:25:23.164 17:00:46 keyring_linux -- nvmf/common.sh@733 -- # python - 00:25:23.441 17:00:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:23.441 17:00:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:23.441 /tmp/:spdk-test:key0 00:25:23.441 17:00:46 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:23.441 17:00:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:23.441 17:00:46 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:23.441 17:00:46 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:23.441 17:00:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:23.441 17:00:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:23.441 17:00:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:25:23.441 17:00:46 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:23.441 17:00:46 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:25:23.442 17:00:46 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:23.442 17:00:46 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:25:23.442 17:00:46 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:25:23.442 17:00:46 keyring_linux -- nvmf/common.sh@733 -- # python - 00:25:23.442 17:00:47 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:23.442 /tmp/:spdk-test:key1 00:25:23.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.442 17:00:47 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:23.442 17:00:47 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=102115 00:25:23.442 17:00:47 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:23.442 17:00:47 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 102115 00:25:23.442 17:00:47 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 102115 ']' 00:25:23.442 17:00:47 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.442 17:00:47 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:23.442 17:00:47 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.442 17:00:47 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:23.442 17:00:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:23.442 [2024-11-29 17:00:47.093294] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:23.442 [2024-11-29 17:00:47.093628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102115 ] 00:25:23.442 [2024-11-29 17:00:47.219974] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:23.710 [2024-11-29 17:00:47.247787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.710 [2024-11-29 17:00:47.266416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.710 [2024-11-29 17:00:47.298827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:24.278 17:00:48 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:24.278 17:00:48 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:25:24.278 17:00:48 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:24.278 17:00:48 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.278 17:00:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:24.278 [2024-11-29 17:00:48.027572] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.278 null0 00:25:24.278 [2024-11-29 17:00:48.059551] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:24.278 [2024-11-29 17:00:48.059745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:24.538 17:00:48 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.538 17:00:48 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:24.538 701879290 00:25:24.538 17:00:48 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:24.538 924771944 00:25:24.538 17:00:48 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=102133 00:25:24.538 17:00:48 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:24.538 17:00:48 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 102133 /var/tmp/bperf.sock 00:25:24.538 17:00:48 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 102133 ']' 00:25:24.538 17:00:48 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:24.538 17:00:48 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.538 17:00:48 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:24.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:24.538 17:00:48 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.538 17:00:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:24.538 [2024-11-29 17:00:48.142428] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.11.0-rc4 initialization... 00:25:24.538 [2024-11-29 17:00:48.142522] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102133 ] 00:25:24.538 [2024-11-29 17:00:48.267231] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:24.538 [2024-11-29 17:00:48.299806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.538 [2024-11-29 17:00:48.323282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.797 17:00:48 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:24.797 17:00:48 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:25:24.797 17:00:48 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:24.797 17:00:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:25.056 17:00:48 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:25.056 17:00:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:25.056 [2024-11-29 17:00:48.847045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:25.315 17:00:48 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:25.316 17:00:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:25.575 [2024-11-29 17:00:49.155366] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:25.575 nvme0n1 00:25:25.575 17:00:49 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:25.575 17:00:49 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:25.575 17:00:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:25.575 17:00:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:25.575 17:00:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:25.575 17:00:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:25.833 17:00:49 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:25.833 17:00:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:25.833 17:00:49 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:25.833 17:00:49 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:25.833 17:00:49 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:25.833 17:00:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:25.833 17:00:49 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:26.092 17:00:49 keyring_linux -- keyring/linux.sh@25 -- # sn=701879290 00:25:26.092 17:00:49 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:26.092 17:00:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:26.092 17:00:49 keyring_linux -- keyring/linux.sh@26 -- # [[ 701879290 == \7\0\1\8\7\9\2\9\0 ]] 00:25:26.092 17:00:49 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 701879290 00:25:26.092 17:00:49 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 
== \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:26.092 17:00:49 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:26.092 Running I/O for 1 seconds... 00:25:27.029 14416.00 IOPS, 56.31 MiB/s 00:25:27.029 Latency(us) 00:25:27.029 [2024-11-29T17:00:50.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.029 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:27.029 nvme0n1 : 1.01 14418.69 56.32 0.00 0.00 8835.01 3455.53 11856.06 00:25:27.029 [2024-11-29T17:00:50.821Z] =================================================================================================================== 00:25:27.029 [2024-11-29T17:00:50.821Z] Total : 14418.69 56.32 0.00 0.00 8835.01 3455.53 11856.06 00:25:27.029 { 00:25:27.029 "results": [ 00:25:27.029 { 00:25:27.029 "job": "nvme0n1", 00:25:27.030 "core_mask": "0x2", 00:25:27.030 "workload": "randread", 00:25:27.030 "status": "finished", 00:25:27.030 "queue_depth": 128, 00:25:27.030 "io_size": 4096, 00:25:27.030 "runtime": 1.00876, 00:25:27.030 "iops": 14418.692255838852, 00:25:27.030 "mibps": 56.32301662437052, 00:25:27.030 "io_failed": 0, 00:25:27.030 "io_timeout": 0, 00:25:27.030 "avg_latency_us": 8835.014270195945, 00:25:27.030 "min_latency_us": 3455.5345454545454, 00:25:27.030 "max_latency_us": 11856.058181818182 00:25:27.030 } 00:25:27.030 ], 00:25:27.030 "core_count": 1 00:25:27.030 } 00:25:27.289 17:00:50 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:27.289 17:00:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:27.289 17:00:51 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:27.289 17:00:51 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:27.289 17:00:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:27.289 17:00:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:27.289 17:00:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:27.289 17:00:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:27.857 17:00:51 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:27.857 17:00:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:27.857 17:00:51 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:27.857 17:00:51 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:27.857 17:00:51 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:25:27.857 17:00:51 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:27.857 17:00:51 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:27.857 17:00:51 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:27.857 17:00:51 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:27.857 17:00:51 keyring_linux -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:27.857 17:00:51 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:27.858 17:00:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:27.858 [2024-11-29 17:00:51.602562] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:27.858 [2024-11-29 17:00:51.602701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1971a00 (107): Transport endpoint is not connected 00:25:27.858 [2024-11-29 17:00:51.603694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1971a00 (9): Bad file descriptor 00:25:27.858 [2024-11-29 17:00:51.604692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:25:27.858 [2024-11-29 17:00:51.604721] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:27.858 [2024-11-29 17:00:51.604733] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:27.858 [2024-11-29 17:00:51.604745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:25:27.858 request: 00:25:27.858 { 00:25:27.858 "name": "nvme0", 00:25:27.858 "trtype": "tcp", 00:25:27.858 "traddr": "127.0.0.1", 00:25:27.858 "adrfam": "ipv4", 00:25:27.858 "trsvcid": "4420", 00:25:27.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:27.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:27.858 "prchk_reftag": false, 00:25:27.858 "prchk_guard": false, 00:25:27.858 "hdgst": false, 00:25:27.858 "ddgst": false, 00:25:27.858 "psk": ":spdk-test:key1", 00:25:27.858 "allow_unrecognized_csi": false, 00:25:27.858 "method": "bdev_nvme_attach_controller", 00:25:27.858 "req_id": 1 00:25:27.858 } 00:25:27.858 Got JSON-RPC error response 00:25:27.858 response: 00:25:27.858 { 00:25:27.858 "code": -5, 00:25:27.858 "message": "Input/output error" 00:25:27.858 } 00:25:27.858 17:00:51 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:25:27.858 17:00:51 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:27.858 17:00:51 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:27.858 17:00:51 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@33 -- # sn=701879290 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl 
unlink 701879290 00:25:27.858 1 links removed 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@33 -- # sn=924771944 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 924771944 00:25:27.858 1 links removed 00:25:27.858 17:00:51 keyring_linux -- keyring/linux.sh@41 -- # killprocess 102133 00:25:27.858 17:00:51 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 102133 ']' 00:25:27.858 17:00:51 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 102133 00:25:27.858 17:00:51 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:25:27.858 17:00:51 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.858 17:00:51 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102133 00:25:28.117 killing process with pid 102133 00:25:28.117 Received shutdown signal, test time was about 1.000000 seconds 00:25:28.117 00:25:28.117 Latency(us) 00:25:28.117 [2024-11-29T17:00:51.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.117 [2024-11-29T17:00:51.909Z] =================================================================================================================== 00:25:28.117 [2024-11-29T17:00:51.909Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102133' 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@973 -- # kill 102133 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@978 -- # wait 102133 00:25:28.117 17:00:51 keyring_linux -- keyring/linux.sh@42 -- # killprocess 102115 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 102115 ']' 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 102115 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102115 00:25:28.117 killing process with pid 102115 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102115' 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@973 -- # kill 102115 00:25:28.117 17:00:51 keyring_linux -- common/autotest_common.sh@978 -- # wait 102115 00:25:28.377 ************************************ 00:25:28.377 END TEST keyring_linux 00:25:28.377 ************************************ 00:25:28.377 00:25:28.377 real 0m5.305s 00:25:28.377 user 0m10.305s 00:25:28.377 sys 0m1.306s 00:25:28.377 17:00:52 keyring_linux 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:28.377 17:00:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:28.377 17:00:52 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:28.377 17:00:52 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:28.377 17:00:52 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:28.377 17:00:52 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:28.377 17:00:52 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:28.377 17:00:52 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:28.377 17:00:52 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:28.377 17:00:52 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:28.377 17:00:52 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:25:28.377 17:00:52 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:28.377 17:00:52 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:25:28.377 17:00:52 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:28.377 17:00:52 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:28.377 17:00:52 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:25:28.377 17:00:52 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:25:28.377 17:00:52 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:25:28.377 17:00:52 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:25:28.377 17:00:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:28.377 17:00:52 -- common/autotest_common.sh@10 -- # set +x 00:25:28.377 17:00:52 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:25:28.377 17:00:52 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:25:28.377 17:00:52 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:25:28.377 17:00:52 -- common/autotest_common.sh@10 -- # set +x 00:25:30.285 INFO: APP EXITING 00:25:30.285 INFO: killing all VMs 00:25:30.285 INFO: killing vhost app 00:25:30.285 INFO: EXIT DONE 00:25:30.852 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:30.852 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:30.852 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:31.799 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:31.799 Cleaning 00:25:31.799 Removing: /var/run/dpdk/spdk0/config 00:25:31.799 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:31.799 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:31.799 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:31.799 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:31.799 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:31.799 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:31.799 Removing: /var/run/dpdk/spdk1/config 00:25:31.799 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:31.799 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:31.799 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:31.799 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:31.799 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:31.799 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:31.799 Removing: /var/run/dpdk/spdk2/config 00:25:31.799 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:31.799 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:31.799 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:31.799 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:31.799 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:31.799 Removing: 
/var/run/dpdk/spdk2/hugepage_info 00:25:31.799 Removing: /var/run/dpdk/spdk3/config 00:25:31.799 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:31.799 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:31.799 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:31.799 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:31.799 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:31.799 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:31.799 Removing: /var/run/dpdk/spdk4/config 00:25:31.799 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:31.799 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:31.799 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:31.799 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:31.799 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:31.799 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:31.799 Removing: /dev/shm/nvmf_trace.0 00:25:31.799 Removing: /dev/shm/spdk_tgt_trace.pid70872 00:25:31.799 Removing: /var/run/dpdk/spdk0 00:25:31.799 Removing: /var/run/dpdk/spdk1 00:25:31.799 Removing: /var/run/dpdk/spdk2 00:25:31.799 Removing: /var/run/dpdk/spdk3 00:25:31.799 Removing: /var/run/dpdk/spdk4 00:25:31.799 Removing: /var/run/dpdk/spdk_pid100091 00:25:31.799 Removing: /var/run/dpdk/spdk_pid100199 00:25:31.799 Removing: /var/run/dpdk/spdk_pid100891 00:25:31.799 Removing: /var/run/dpdk/spdk_pid100922 00:25:31.799 Removing: /var/run/dpdk/spdk_pid100963 00:25:31.799 Removing: /var/run/dpdk/spdk_pid101207 00:25:31.799 Removing: /var/run/dpdk/spdk_pid101242 00:25:31.799 Removing: /var/run/dpdk/spdk_pid101274 00:25:31.800 Removing: /var/run/dpdk/spdk_pid101746 00:25:31.800 Removing: /var/run/dpdk/spdk_pid101750 00:25:31.800 Removing: /var/run/dpdk/spdk_pid101993 00:25:31.800 Removing: /var/run/dpdk/spdk_pid102115 00:25:31.800 Removing: /var/run/dpdk/spdk_pid102133 00:25:31.800 Removing: /var/run/dpdk/spdk_pid70721 00:25:31.800 Removing: /var/run/dpdk/spdk_pid70872 00:25:31.800 Removing: /var/run/dpdk/spdk_pid71065 00:25:31.800 Removing: /var/run/dpdk/spdk_pid71146 00:25:31.800 Removing: /var/run/dpdk/spdk_pid71166 00:25:31.800 Removing: /var/run/dpdk/spdk_pid71276 00:25:31.800 Removing: /var/run/dpdk/spdk_pid71286 00:25:31.800 Removing: /var/run/dpdk/spdk_pid71420 00:25:31.800 Removing: /var/run/dpdk/spdk_pid71616 00:25:31.800 Removing: /var/run/dpdk/spdk_pid71770 00:25:31.800 Removing: /var/run/dpdk/spdk_pid71848 00:25:31.800 Removing: /var/run/dpdk/spdk_pid71919 00:25:31.800 Removing: /var/run/dpdk/spdk_pid72005 00:25:31.800 Removing: /var/run/dpdk/spdk_pid72077 00:25:31.800 Removing: /var/run/dpdk/spdk_pid72110 00:25:31.800 Removing: /var/run/dpdk/spdk_pid72145 00:25:31.800 Removing: /var/run/dpdk/spdk_pid72215 00:25:31.800 Removing: /var/run/dpdk/spdk_pid72301 00:25:31.800 Removing: /var/run/dpdk/spdk_pid72746 00:25:31.800 Removing: /var/run/dpdk/spdk_pid72787 00:25:31.800 Removing: /var/run/dpdk/spdk_pid72838 00:25:31.800 Removing: /var/run/dpdk/spdk_pid72847 00:25:31.800 Removing: /var/run/dpdk/spdk_pid72909 00:25:31.800 Removing: /var/run/dpdk/spdk_pid72925 00:25:31.800 Removing: /var/run/dpdk/spdk_pid72979 00:25:31.800 Removing: /var/run/dpdk/spdk_pid72995 00:25:31.800 Removing: /var/run/dpdk/spdk_pid73041 00:25:31.800 Removing: /var/run/dpdk/spdk_pid73059 00:25:31.800 Removing: /var/run/dpdk/spdk_pid73099 00:25:31.800 Removing: /var/run/dpdk/spdk_pid73117 00:25:31.800 Removing: /var/run/dpdk/spdk_pid73247 00:25:31.800 Removing: /var/run/dpdk/spdk_pid73277 00:25:31.800 
Removing: /var/run/dpdk/spdk_pid73360 00:25:31.800 Removing: /var/run/dpdk/spdk_pid73686 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73704 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73735 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73748 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73764 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73783 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73791 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73812 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73825 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73839 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73854 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73873 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73887 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73902 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73921 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73935 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73945 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73964 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73983 00:25:32.059 Removing: /var/run/dpdk/spdk_pid73993 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74029 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74037 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74072 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74133 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74167 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74171 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74205 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74209 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74211 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74259 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74267 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74301 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74305 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74309 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74324 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74328 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74339 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74347 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74351 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74385 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74406 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74421 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74444 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74448 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74461 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74496 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74512 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74534 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74536 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74549 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74551 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74559 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74566 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74568 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74581 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74658 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74699 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74807 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74842 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74885 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74900 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74916 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74931 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74962 00:25:32.059 Removing: /var/run/dpdk/spdk_pid74978 00:25:32.059 Removing: /var/run/dpdk/spdk_pid75050 00:25:32.059 Removing: /var/run/dpdk/spdk_pid75066 00:25:32.059 Removing: /var/run/dpdk/spdk_pid75110 00:25:32.059 Removing: 
/var/run/dpdk/spdk_pid75166 00:25:32.059 Removing: /var/run/dpdk/spdk_pid75216 00:25:32.059 Removing: /var/run/dpdk/spdk_pid75240 00:25:32.059 Removing: /var/run/dpdk/spdk_pid75334 00:25:32.059 Removing: /var/run/dpdk/spdk_pid75375 00:25:32.059 Removing: /var/run/dpdk/spdk_pid75412 00:25:32.059 Removing: /var/run/dpdk/spdk_pid75633 00:25:32.059 Removing: /var/run/dpdk/spdk_pid75725 00:25:32.059 Removing: /var/run/dpdk/spdk_pid75759 00:25:32.059 Removing: /var/run/dpdk/spdk_pid75783 00:25:32.059 Removing: /var/run/dpdk/spdk_pid75821 00:25:32.059 Removing: /var/run/dpdk/spdk_pid75850 00:25:32.319 Removing: /var/run/dpdk/spdk_pid75878 00:25:32.319 Removing: /var/run/dpdk/spdk_pid75915 00:25:32.319 Removing: /var/run/dpdk/spdk_pid76296 00:25:32.319 Removing: /var/run/dpdk/spdk_pid76336 00:25:32.319 Removing: /var/run/dpdk/spdk_pid76675 00:25:32.319 Removing: /var/run/dpdk/spdk_pid77142 00:25:32.319 Removing: /var/run/dpdk/spdk_pid77419 00:25:32.319 Removing: /var/run/dpdk/spdk_pid78267 00:25:32.319 Removing: /var/run/dpdk/spdk_pid79185 00:25:32.319 Removing: /var/run/dpdk/spdk_pid79301 00:25:32.319 Removing: /var/run/dpdk/spdk_pid79371 00:25:32.319 Removing: /var/run/dpdk/spdk_pid80774 00:25:32.319 Removing: /var/run/dpdk/spdk_pid81082 00:25:32.319 Removing: /var/run/dpdk/spdk_pid84774 00:25:32.319 Removing: /var/run/dpdk/spdk_pid85132 00:25:32.319 Removing: /var/run/dpdk/spdk_pid85244 00:25:32.319 Removing: /var/run/dpdk/spdk_pid85371 00:25:32.319 Removing: /var/run/dpdk/spdk_pid85392 00:25:32.319 Removing: /var/run/dpdk/spdk_pid85421 00:25:32.319 Removing: /var/run/dpdk/spdk_pid85442 00:25:32.319 Removing: /var/run/dpdk/spdk_pid85532 00:25:32.319 Removing: /var/run/dpdk/spdk_pid85660 00:25:32.319 Removing: /var/run/dpdk/spdk_pid85796 00:25:32.319 Removing: /var/run/dpdk/spdk_pid85870 00:25:32.319 Removing: /var/run/dpdk/spdk_pid86051 00:25:32.319 Removing: /var/run/dpdk/spdk_pid86120 00:25:32.319 Removing: /var/run/dpdk/spdk_pid86201 00:25:32.319 Removing: /var/run/dpdk/spdk_pid86558 00:25:32.319 Removing: /var/run/dpdk/spdk_pid86971 00:25:32.319 Removing: /var/run/dpdk/spdk_pid86972 00:25:32.319 Removing: /var/run/dpdk/spdk_pid86973 00:25:32.319 Removing: /var/run/dpdk/spdk_pid87227 00:25:32.319 Removing: /var/run/dpdk/spdk_pid87471 00:25:32.319 Removing: /var/run/dpdk/spdk_pid87473 00:25:32.319 Removing: /var/run/dpdk/spdk_pid89797 00:25:32.319 Removing: /var/run/dpdk/spdk_pid90179 00:25:32.319 Removing: /var/run/dpdk/spdk_pid90181 00:25:32.319 Removing: /var/run/dpdk/spdk_pid90498 00:25:32.319 Removing: /var/run/dpdk/spdk_pid90518 00:25:32.319 Removing: /var/run/dpdk/spdk_pid90532 00:25:32.319 Removing: /var/run/dpdk/spdk_pid90562 00:25:32.319 Removing: /var/run/dpdk/spdk_pid90573 00:25:32.319 Removing: /var/run/dpdk/spdk_pid90658 00:25:32.319 Removing: /var/run/dpdk/spdk_pid90666 00:25:32.319 Removing: /var/run/dpdk/spdk_pid90774 00:25:32.319 Removing: /var/run/dpdk/spdk_pid90776 00:25:32.319 Removing: /var/run/dpdk/spdk_pid90884 00:25:32.319 Removing: /var/run/dpdk/spdk_pid90886 00:25:32.319 Removing: /var/run/dpdk/spdk_pid91332 00:25:32.319 Removing: /var/run/dpdk/spdk_pid91382 00:25:32.319 Removing: /var/run/dpdk/spdk_pid91486 00:25:32.319 Removing: /var/run/dpdk/spdk_pid91570 00:25:32.319 Removing: /var/run/dpdk/spdk_pid91909 00:25:32.319 Removing: /var/run/dpdk/spdk_pid92098 00:25:32.319 Removing: /var/run/dpdk/spdk_pid92513 00:25:32.319 Removing: /var/run/dpdk/spdk_pid93067 00:25:32.319 Removing: /var/run/dpdk/spdk_pid93917 00:25:32.319 Removing: /var/run/dpdk/spdk_pid94540 
00:25:32.319 Removing: /var/run/dpdk/spdk_pid94548 00:25:32.319 Removing: /var/run/dpdk/spdk_pid96571 00:25:32.319 Removing: /var/run/dpdk/spdk_pid96627 00:25:32.319 Removing: /var/run/dpdk/spdk_pid96674 00:25:32.319 Removing: /var/run/dpdk/spdk_pid96730 00:25:32.319 Removing: /var/run/dpdk/spdk_pid96838 00:25:32.319 Removing: /var/run/dpdk/spdk_pid96885 00:25:32.319 Removing: /var/run/dpdk/spdk_pid96939 00:25:32.319 Removing: /var/run/dpdk/spdk_pid96994 00:25:32.319 Removing: /var/run/dpdk/spdk_pid97351 00:25:32.319 Removing: /var/run/dpdk/spdk_pid98561 00:25:32.319 Removing: /var/run/dpdk/spdk_pid98694 00:25:32.319 Removing: /var/run/dpdk/spdk_pid98924 00:25:32.319 Removing: /var/run/dpdk/spdk_pid99512 00:25:32.319 Removing: /var/run/dpdk/spdk_pid99667 00:25:32.319 Removing: /var/run/dpdk/spdk_pid99824 00:25:32.319 Removing: /var/run/dpdk/spdk_pid99921 00:25:32.319 Clean 00:25:32.578 17:00:56 -- common/autotest_common.sh@1453 -- # return 0 00:25:32.578 17:00:56 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:25:32.578 17:00:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:32.578 17:00:56 -- common/autotest_common.sh@10 -- # set +x 00:25:32.578 17:00:56 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:25:32.578 17:00:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:32.578 17:00:56 -- common/autotest_common.sh@10 -- # set +x 00:25:32.578 17:00:56 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:32.578 17:00:56 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:32.578 17:00:56 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:32.578 17:00:56 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:25:32.578 17:00:56 -- spdk/autotest.sh@398 -- # hostname 00:25:32.578 17:00:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:32.837 geninfo: WARNING: invalid characters removed from testname! 
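For readability, the coverage capture just recorded and the post-processing that follows amount to the lcov sequence below. This is a condensed sketch of the commands visible in the surrounding log, not an addition to it: the repeated --rc lcov_branch_coverage/genhtml flags are dropped, and SPDK_DIR / OUT_DIR stand in for the full /home/vagrant/spdk_repo/spdk and /home/vagrant/spdk_repo/spdk/../output paths.

  # capture per-test coverage from the instrumented build tree
  # (the log passes the VM hostname as the -t test name)
  lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT_DIR/cov_test.info"
  # merge the pre-test baseline with this test run
  lcov -q -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" -o "$OUT_DIR/cov_total.info"
  # strip bundled DPDK, system headers, and example/app code from the report
  lcov -q -r "$OUT_DIR/cov_total.info" '*/dpdk/*' -o "$OUT_DIR/cov_total.info"
  lcov -q -r "$OUT_DIR/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT_DIR/cov_total.info"
  lcov -q -r "$OUT_DIR/cov_total.info" '*/examples/vmd/*' -o "$OUT_DIR/cov_total.info"
  lcov -q -r "$OUT_DIR/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT_DIR/cov_total.info"
  lcov -q -r "$OUT_DIR/cov_total.info" '*/app/spdk_top/*' -o "$OUT_DIR/cov_total.info"
  # drop the intermediate tracefiles once cov_total.info is final
  rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR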
00:25:54.774 17:01:17 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:58.060 17:01:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:59.963 17:01:23 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:02.497 17:01:26 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:05.033 17:01:28 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:06.939 17:01:30 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:09.488 17:01:32 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:09.488 17:01:32 -- spdk/autorun.sh@1 -- $ timing_finish 00:26:09.488 17:01:32 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:26:09.488 17:01:32 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:09.488 17:01:32 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:26:09.488 17:01:32 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:09.488 + [[ -n 5988 ]] 00:26:09.488 + sudo kill 5988 00:26:09.498 [Pipeline] } 00:26:09.514 [Pipeline] // timeout 00:26:09.520 [Pipeline] } 00:26:09.534 [Pipeline] // stage 00:26:09.539 [Pipeline] } 00:26:09.553 [Pipeline] // catchError 00:26:09.563 [Pipeline] stage 00:26:09.565 [Pipeline] { (Stop VM) 00:26:09.578 [Pipeline] sh 00:26:09.861 + vagrant halt 00:26:13.151 ==> default: Halting domain... 
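Stripped of the Jenkins [Pipeline] framing, the VM teardown recorded from here on is the usual vagrant flow; roughly the equivalent of the following (a sketch of the steps visible in the log, with the workspace path shortened):

  vagrant halt        # stop the fedora39 test VM ("Halting domain..." above)
  vagrant destroy -f  # remove the VM without prompting ("Removing domain..." below)
  mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output  # publish results

After that, the compress_artifacts.sh and check_artifacts_size.sh helper scripts run, the artifacts are archived, and the workspace is wiped.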
00:26:19.837 [Pipeline] sh 00:26:20.122 + vagrant destroy -f 00:26:23.423 ==> default: Removing domain... 00:26:23.436 [Pipeline] sh 00:26:23.717 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:26:23.726 [Pipeline] } 00:26:23.741 [Pipeline] // stage 00:26:23.747 [Pipeline] } 00:26:23.762 [Pipeline] // dir 00:26:23.768 [Pipeline] } 00:26:23.782 [Pipeline] // wrap 00:26:23.789 [Pipeline] } 00:26:23.801 [Pipeline] // catchError 00:26:23.811 [Pipeline] stage 00:26:23.814 [Pipeline] { (Epilogue) 00:26:23.829 [Pipeline] sh 00:26:24.111 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:29.398 [Pipeline] catchError 00:26:29.400 [Pipeline] { 00:26:29.414 [Pipeline] sh 00:26:29.696 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:29.955 Artifacts sizes are good 00:26:29.964 [Pipeline] } 00:26:29.979 [Pipeline] // catchError 00:26:29.991 [Pipeline] archiveArtifacts 00:26:29.998 Archiving artifacts 00:26:30.120 [Pipeline] cleanWs 00:26:30.133 [WS-CLEANUP] Deleting project workspace... 00:26:30.133 [WS-CLEANUP] Deferred wipeout is used... 00:26:30.140 [WS-CLEANUP] done 00:26:30.142 [Pipeline] } 00:26:30.158 [Pipeline] // stage 00:26:30.163 [Pipeline] } 00:26:30.178 [Pipeline] // node 00:26:30.183 [Pipeline] End of Pipeline 00:26:30.228 Finished: SUCCESS
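The keyring_linux steps traced near the top of this excerpt follow the pattern sketched below. This is a simplified reconstruction from the keyring/linux.sh and keyring/common.sh xtrace lines in the log, not the literal test script: the rpc.py path is abbreviated, the initial keyctl add is assumed (the key was created before this part of the log), and the PSK payload is the value the log prints for :spdk-test:key0.

  # store the TLS PSK as a 'user' key in the session keyring under the name the test uses
  keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s
  # point bdevperf at the key by name over its RPC socket
  rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  rpc.py -s /var/tmp/bperf.sock framework_start_init
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  # cleanup: resolve the key's serial number, show its payload, and unlink it from the keyring
  sn=$(keyctl search @s user :spdk-test:key0)
  keyctl print "$sn"
  keyctl unlink "$sn"

The second attach with --psk :spdk-test:key1 is the negative case: the NOT bperf_cmd wrapper asserts that the attach fails, which is the Input/output error shown in the JSON-RPC response above, and the cleanup path then unlinks both keys.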